74 Commits

Author SHA1 Message Date
Austin Alvarado
a627e69e46 putting a pin in it 2024-01-19 01:37:54 +00:00
Valentin Tolmer
bd0a58b476 server: clean up the attributes, relax the substring filter conditions
This consolidates both user and group attributes in their map_{user,group}_attribute as the only point of parsing. It adds support for custom attribute filters for groups, and makes a SubString filter on an unknown attribute resolve to just false.
2024-01-17 23:44:25 +01:00
dependabot[bot]
4adb636d53 build(deps): bump actions/cache from 3 to 4
Bumps [actions/cache](https://github.com/actions/cache) from 3 to 4.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-17 22:11:09 +01:00
Valentin Tolmer
6f905b1ca9 server: update ldap3_proto dependency
This will fix the issue with some unhandled controls, this time for sure
2024-01-16 17:52:15 +01:00
Valentin Tolmer
2ea17c04ba server: Move the definition of UserId down to lldap_auth 2024-01-15 23:48:59 +01:00
Valentin Tolmer
10609b25e9 docs: Misc updates
Deprecate key_file in favor of key_seed, add a script to generate the secrets
2024-01-14 22:57:10 +01:00
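A `key_seed` is just a high-entropy random string. As an illustration only (not necessarily the repository's own secret-generation script, and the exact format lldap accepts is an assumption here), such a secret can be produced with:

```shell
# Generate a random, base64-encoded secret suitable for use as a key seed.
seed=$(openssl rand -base64 32)
echo "key_seed = \"$seed\""
```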
Valentin Tolmer
9f8364ca1a server: Fix private key reset functionality 2024-01-14 22:54:13 +01:00
Valentin Tolmer
56078c0b47 docs: add lldap-cli references, improve README 2024-01-13 22:53:12 +01:00
Valentin Tolmer
8b7852bf1c chore: clippy warnings 2024-01-13 18:32:58 +01:00
Valentin Tolmer
c4be7f5b6f server: Serialize attribute values when searching
This should fix #763 and allow filtering by custom attribute values.
2024-01-13 13:37:46 +01:00
Valentin Tolmer
337101edea server: update ldap3_proto dependency
This will fix the issue with some unhandled controls
2024-01-08 16:10:11 +01:00
Valentin Tolmer
dc140f1675 server: exit with non-zero code when running into errors starting 2024-01-06 00:43:41 +01:00
Roman
f74f88f0c0 example_configs: Add grocy 2024-01-03 21:46:14 +01:00
Valentin Tolmer
708d927e90 server: add a unique index to the memberships 2024-01-03 12:40:24 +01:00
Valentin Tolmer
0d48b7f8c9 server: add support for entryDN 2023-12-31 08:27:25 +01:00
Valentin Tolmer
f2b1e73929 server: Add a check for a changing private key
This checks that the private key used to encode the passwords has not
changed since the last successful startup, since a changed key would
corrupt all the passwords. Lots of common scenarios are covered, with
various combinations of the key in a file or from a seed, set in the
config file, in an env variable or through the CLI, and so on.
2023-12-29 15:37:52 +01:00
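A minimal sketch of the idea behind this check: remember a fingerprint of the key at the first successful startup, and refuse to start if it later changes. The file names and the fingerprinting scheme here are illustrative, not lldap's actual implementation.

```shell
# Work in a scratch directory with a stand-in for the real key.
cd "$(mktemp -d)"
echo "example-private-key" > server_key

if [ -f server_key.sha256 ]; then
    # A later startup: verify the key still matches the stored fingerprint.
    sha256sum -c server_key.sha256 >/dev/null || {
        echo "Private key changed since last startup; refusing to start" >&2
        exit 1
    }
else
    # First startup: record the fingerprint.
    sha256sum server_key > server_key.sha256
fi
echo "startup check passed"
```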
Dedy Martadinata S
997119cdcf switch up build steps (#776)
* switch up build steps

* also switch the buildx
2023-12-29 00:23:57 +07:00
ddiawara
a147085a2f example_configs: add Dovecot configuration for docker-mailserver
---------

Co-authored-by: Dedy Martadinata S <dedyms@proton.me>
2023-12-28 11:26:37 +01:00
Dedy Martadinata S
f363ff9437 docker: Add a rootless container
New images with "-rootless" tags will automatically get released on the docker registry.
2023-12-28 11:22:20 +01:00
Haoyu Xu
b6e6269956 example_configs: make the zitadel doc more comprehensive
fixed `Userbase` attribute; added `Preferred username attribute`; added `Automatic creation`
2023-12-25 18:48:07 +01:00
Valentin Tolmer
ff0ea51121 server: Add an option to force reset the admin password 2023-12-22 08:27:35 +01:00
Haoyu Xu
9ac96e8c6e example_configs: add support for admins and local users in homeassistant 2023-12-19 22:36:00 +01:00
Haoyu Xu
63f802648f example_configs: Add zitadel 2023-12-19 22:11:21 +01:00
Valentin Tolmer
1aba962cd3 readme: Fix block quote 2023-12-19 13:42:07 +01:00
Dedy Martadinata S
06697a5305 readme: Add installation from package 2023-12-19 13:34:26 +01:00
Sematre
5a5d5b1d0e example_configs: Add GitLab 2023-12-17 22:46:02 +01:00
Cherryblue
2e0d65e665 example_configs: Update seafile.md for v11
Updating the guide for Seafile v11+, to mention the differences.
2023-12-16 09:08:30 +01:00
Valentin Tolmer
2c54ad895d chore: clippy 2023-12-15 23:37:25 +01:00
Valentin Tolmer
272c84c574 server: make attribute names, group names and emails case insensitive
In addition, group names and emails keep their casing
2023-12-15 23:21:22 +01:00
dependabot[bot]
71d37b9e5e build(deps): bump actions/download-artifact from 3 to 4
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 3 to 4.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-14 22:08:22 +01:00
dependabot[bot]
c55e0f3bcf build(deps): bump actions/upload-artifact from 3 to 4
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 3 to 4.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-14 21:55:41 +01:00
Nicholas Malcolm
f2946e6cf6 docs: Fix the Bootstrap script skipping similar name groups
Existing logic used jq's `contains`, which confusingly does partial string matches. For example, a group named "media_admin" will be created, then "media" will be skipped with a message saying it already exists.
2023-12-12 04:22:28 +01:00
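The jq pitfall described above is easy to reproduce: `contains` on strings checks for substrings, so an exact comparison is needed instead. These are illustrative commands, not the bootstrap script itself:

```shell
# `contains` matches substrings: "media" is "found" inside "media_admin".
jq -n '["media_admin"] | contains(["media"])'    # prints true

# An exact comparison avoids the false positive.
jq -n '["media_admin"] | any(.[]; . == "media")' # prints false
```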
jakob42
f3e2f8c52d example_configs: Add Kasm configuration example 2023-12-11 10:53:53 +01:00
MinerSebas
70d85524db app: make it possible to serve lldap behind a sub-path 2023-12-07 18:21:49 +01:00
Mohit Raj
ec0737c58a docs(config): clarify docker networking setup 2023-12-03 15:10:51 +01:00
Yevhen Kolomeiko
33f50d13a2 example_configs(bootstrap.sh): Add a check that the user is in the group 2023-11-30 11:06:16 +01:00
null
5cd4499328 chore(docs): update jenkins.md
Use the correct Manager DN.
2023-11-23 05:59:35 +01:00
Christian Medel
a65ad14349 example_configs: Add Mastodon and Traccar 2023-11-20 22:05:06 +01:00
Zepmann
2ca5e9e720 Readme: add AUR installation instructions 2023-11-17 07:16:59 +01:00
Valentin Tolmer
4f72153bd4 server: Disallow deleting hardcoded attributes 2023-11-05 16:19:04 +01:00
Valentin Tolmer
829c3f2bb1 server: Prevent regular users from modifying non-editable attributes 2023-11-05 16:06:45 +01:00
themartinslife
a6481dde56 example_configs: add a Jenkins config 2023-11-04 15:41:36 +01:00
Yevhen Kolomeiko
35146ac904 example_configs: Add bootstrap script 2023-11-02 20:49:15 +01:00
Cherryblue
d488802e68 example_configs: Fix display name in wikijs.md
Corrects the display name alias so that it works with wikijs.
2023-11-01 10:23:06 +01:00
nitnelave
927c79bb55 github: Create issue templates 2023-10-30 22:58:52 +01:00
Valentin Tolmer
3b6f24dd17 github: Add CONTRIBUTING guidelines 2023-10-30 22:40:56 +01:00
Valentin Tolmer
8ab900dfce github: update postgres migration sed to handle jwt_storage 2023-10-30 21:59:48 +01:00
Valentin Tolmer
504227eb13 server: Add JWTs to the DB
Otherwise, logging out doesn't actually blacklist the JWT
2023-10-30 21:59:48 +01:00
Hobbabobba
1b97435853 example_configs: Add a working admin user for dokuwiki (#720) 2023-10-30 13:38:13 +01:00
Valentin Tolmer
1fddd87470 server: Simplify RequestFilter's TryInto 2023-10-30 11:31:04 +01:00
dependabot[bot]
af8277dbbd build(deps): bump docker/login-action from 2 to 3
Bumps [docker/login-action](https://github.com/docker/login-action) from 2 to 3.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-30 10:05:11 +01:00
dependabot[bot]
609d0ddb7d build(deps): bump docker/metadata-action from 4 to 5
Bumps [docker/metadata-action](https://github.com/docker/metadata-action) from 4 to 5.
- [Release notes](https://github.com/docker/metadata-action/releases)
- [Upgrade guide](https://github.com/docker/metadata-action/blob/master/UPGRADE.md)
- [Commits](https://github.com/docker/metadata-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: docker/metadata-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-26 13:34:27 +02:00
dependabot[bot]
3df42ae707 build(deps): bump docker/setup-qemu-action from 2 to 3
Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 2 to 3.
- [Release notes](https://github.com/docker/setup-qemu-action/releases)
- [Commits](https://github.com/docker/setup-qemu-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/setup-qemu-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-26 08:25:13 +02:00
dependabot[bot]
8f9520b640 build(deps): bump actions/checkout from 4.0.0 to 4.1.1 (#716)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.0.0 to 4.1.1.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4.0.0...v4.1.1)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-26 04:19:27 +02:00
dependabot[bot]
7c9f61e2eb build(deps): bump docker/build-push-action from 4 to 5 (#677)
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 4 to 5.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-26 03:42:52 +02:00
dependabot[bot]
5275af8f96 build(deps): bump docker/setup-buildx-action from 2 to 3 (#676)
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 2 to 3.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](https://github.com/docker/setup-buildx-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: nitnelave <valentin@tolmer.fr>
2023-10-25 19:55:03 +02:00
Andrew Roberts
0db41f6278 docker: add date-based tagging to matrix jobs 2023-10-23 08:34:24 +02:00
Florian
4574538c76 clippy: fix warning for unwrap_or_default 2023-10-22 20:34:31 +02:00
Florian
9d5714ee0b chore: update repository references 2023-10-22 19:59:36 +02:00
Valentin Tolmer
c6ecf8d58a server: Add graphql support for setting attributes 2023-10-22 16:34:15 +02:00
MI3Guy
9e88bfe6b4 docs: fix primary key in PG migration
When importing data, Postgres doesn't update the auto-increment counter for the groups, so creating a group after an import would fail due to duplicate IDs. This manually sets the counter to the max of the IDs + 1.
2023-10-09 16:35:52 +02:00
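The fix amounts to resetting the sequence after the import. As an illustrative SQL fragment only (the table, column, and sequence names are assumptions, not necessarily lldap's actual schema):

```sql
-- Bump the sequence so the next INSERT gets a fresh id.
-- The third argument (false) means the next nextval() returns this value.
SELECT setval(pg_get_serial_sequence('groups', 'group_id'),
              (SELECT COALESCE(MAX(group_id), 0) + 1 FROM groups),
              false);
```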
Simon Broeng Jensen
5bd81780b3 server: Add basic support for Paged Results Control (RFC 2696)
This implements rudimentary support for the Paged
Results Control.

No actual pagination is performed, and we ignore
any requests for specific window sizes for paginated
results.

Instead, the full list of search results is returned
for any search, and a control is added to the
SearchResultsDone message, informing the client that
there are no further results available.
2023-10-06 13:52:05 +02:00
Simon Broeng Jensen
4fd71ff02f example_configs: Add Apereo CAS Server 2023-10-04 15:02:19 +02:00
dependabot[bot]
f0046692b8 build(deps): bump webpki from 0.22.1 to 0.22.2
Bumps [webpki](https://github.com/briansmith/webpki) from 0.22.1 to 0.22.2.
- [Commits](https://github.com/briansmith/webpki/commits)

---
updated-dependencies:
- dependency-name: webpki
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-04 02:19:09 +02:00
Valentin Tolmer
439fde434b server: Add graphql support for creating/deleting attributes 2023-10-04 02:07:04 +02:00
Valentin Tolmer
2a5fd01439 server: add support for creating a group with attributes 2023-09-29 02:31:20 +02:00
Valentin Tolmer
2c398d0e8e server: Add domain support for creating/deleting attributes 2023-09-29 02:31:20 +02:00
Valentin Tolmer
93e9985a81 server: rename SchemaBackendHandler -> ReadSchemaBackendHandler 2023-09-29 02:31:20 +02:00
stuart938503
ed3be02384 lldap_set_password: Add option to bypass password requirements 2023-09-28 22:39:50 +02:00
Valentin Tolmer
3fadfb1944 server: add support for creating a user with attributes 2023-09-25 01:57:24 +02:00
Valentin Tolmer
81204dcee5 server: add support for updating user attributes 2023-09-25 01:57:24 +02:00
Valentin Tolmer
39a75b2c35 server: read custom attributes from LDAP 2023-09-15 15:26:18 +02:00
Valentin Tolmer
8e1515c27b version: bump to 0.5.1-alpha 2023-09-15 00:52:33 +02:00
Valentin Tolmer
ddfd719884 readme: Update references to nitnelave/lldap to lldap/lldap 2023-09-15 00:28:01 +02:00
98 changed files with 5400 additions and 1271 deletions

.github/ISSUE_TEMPLATE/bug_report.md

@@ -0,0 +1,29 @@
---
name: Bug report
about: Create a report to help us improve
title: "[BUG]"
labels: bug
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Logs**
If applicable, add logs to explain the problem.
LLDAP should be started in verbose mode (`LLDAP_VERBOSE=true` env variable, or `verbose = true` in the config). Include the logs between triple backticks (```).
If integrating with another service, please add its configuration (paste it or screenshot it) as well as any useful logs or screenshots (showing the error, for instance).
**Additional context**
Add any other context about the problem here.


@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: "[FEATURE REQUEST]"
labels: enhancement
assignees: ''
---
**Motivation**
Why do you want the feature? What problem do you have, what use cases would it enable?
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered. You can include workarounds that are currently possible.
**Additional context**
Add any other context or screenshots about the feature request here.


@@ -0,0 +1,25 @@
---
name: Integration request
about: Request for integration with a service
title: "[INTEGRATION]"
labels: integration
assignees: ''
---
**Checklist**
- [ ] Check if there is already an [example config](https://github.com/lldap/lldap/tree/main/example_configs) for it.
- [ ] Try to figure out the configuration values for the new service yourself.
- You can use other example configs for inspiration.
- If you're having trouble, you can ask on [Discord](https://discord.gg/h5PEdRMNyP) or create an issue.
- If you succeed, make sure to contribute an example configuration, or a configuration guide.
- If you hit a block because of an unimplemented feature, create an issue.
**Description of the service**
Quick summary of what the service is and how it's using LDAP. Link to the service's documentation on configuring LDAP.
**What you've tried**
A sample configuration that you've tried.
**What's not working**
Error logs, error screenshots, features that are not working, missing features.


@@ -1,72 +1,6 @@
FROM debian:bullseye AS lldap
ARG DEBIAN_FRONTEND=noninteractive
ARG TARGETPLATFORM
RUN apt update && apt install -y wget
WORKDIR /dim
COPY bin/ bin/
COPY web/ web/
RUN mkdir -p target/
RUN mkdir -p /lldap/app
RUN if [ "${TARGETPLATFORM}" = "linux/amd64" ]; then \
mv bin/x86_64-unknown-linux-musl-lldap-bin/lldap target/lldap && \
mv bin/x86_64-unknown-linux-musl-lldap_migration_tool-bin/lldap_migration_tool target/lldap_migration_tool && \
mv bin/x86_64-unknown-linux-musl-lldap_set_password-bin/lldap_set_password target/lldap_set_password && \
chmod +x target/lldap && \
chmod +x target/lldap_migration_tool && \
chmod +x target/lldap_set_password && \
ls -la target/ . && \
pwd \
; fi
RUN if [ "${TARGETPLATFORM}" = "linux/arm64" ]; then \
mv bin/aarch64-unknown-linux-musl-lldap-bin/lldap target/lldap && \
mv bin/aarch64-unknown-linux-musl-lldap_migration_tool-bin/lldap_migration_tool target/lldap_migration_tool && \
mv bin/aarch64-unknown-linux-musl-lldap_set_password-bin/lldap_set_password target/lldap_set_password && \
chmod +x target/lldap && \
chmod +x target/lldap_migration_tool && \
chmod +x target/lldap_set_password && \
ls -la target/ . && \
pwd \
; fi
RUN if [ "${TARGETPLATFORM}" = "linux/arm/v7" ]; then \
mv bin/armv7-unknown-linux-musleabihf-lldap-bin/lldap target/lldap && \
mv bin/armv7-unknown-linux-musleabihf-lldap_migration_tool-bin/lldap_migration_tool target/lldap_migration_tool && \
mv bin/armv7-unknown-linux-musleabihf-lldap_set_password-bin/lldap_set_password target/lldap_set_password && \
chmod +x target/lldap && \
chmod +x target/lldap_migration_tool && \
chmod +x target/lldap_set_password && \
ls -la target/ . && \
pwd \
; fi
# Web and App dir
COPY docker-entrypoint.sh /docker-entrypoint.sh
COPY lldap_config.docker_template.toml /lldap/
COPY web/index_local.html web/index.html
RUN cp target/lldap /lldap/ && \
cp target/lldap_migration_tool /lldap/ && \
cp target/lldap_set_password /lldap/ && \
cp -R web/index.html \
web/pkg \
web/static \
/lldap/app/
WORKDIR /lldap
RUN set -x \
&& for file in $(cat /lldap/app/static/libraries.txt); do wget -P app/static "$file"; done \
&& for file in $(cat /lldap/app/static/fonts/fonts.txt); do wget -P app/static/fonts "$file"; done \
&& chmod a+r -R .
FROM alpine:3.16
WORKDIR /app
ENV UID=1000
ENV GID=1000
ENV USER=lldap
ENV GOSU_VERSION 1.14
# Fetch gosu from git
FROM localhost:5000/lldap/lldap:alpine-base
# Taken directly from https://github.com/tianon/gosu/blob/master/INSTALL.md
ENV GOSU_VERSION 1.17
RUN set -eux; \
\
apk add --no-cache --virtual .gosu-deps \
@@ -83,7 +17,7 @@ RUN set -eux; \
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
command -v gpgconf && gpgconf --kill all || :; \
gpgconf --kill all; \
rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
\
# clean up fetch dependencies
@@ -93,22 +27,4 @@ RUN set -eux; \
# verify that the binary works
gosu --version; \
gosu nobody true
RUN apk add --no-cache tini ca-certificates bash tzdata && \
addgroup -g $GID $USER && \
adduser \
--disabled-password \
--gecos "" \
--home "$(pwd)" \
--ingroup "$USER" \
--no-create-home \
--uid "$UID" \
"$USER" && \
mkdir -p /data && \
chown $USER:$USER /data
COPY --from=lldap --chown=$USER:$USER /lldap /app
COPY --from=lldap --chown=$USER:$USER /docker-entrypoint.sh /docker-entrypoint.sh
VOLUME ["/data"]
WORKDIR /app
ENTRYPOINT ["tini", "--", "/docker-entrypoint.sh"]
CMD ["run", "--config-file", "/data/lldap_config.toml"]
HEALTHCHECK CMD ["/app/lldap", "healthcheck", "--config-file", "/data/lldap_config.toml"]
COPY --chown=$USER:$USER docker-entrypoint.sh /docker-entrypoint.sh


@@ -0,0 +1,84 @@
FROM debian:bullseye AS lldap
ARG DEBIAN_FRONTEND=noninteractive
ARG TARGETPLATFORM
RUN apt update && apt install -y wget
WORKDIR /dim
COPY bin/ bin/
COPY web/ web/
RUN mkdir -p target/
RUN mkdir -p /lldap/app
RUN if [ "${TARGETPLATFORM}" = "linux/amd64" ]; then \
mv bin/x86_64-unknown-linux-musl-lldap-bin/lldap target/lldap && \
mv bin/x86_64-unknown-linux-musl-lldap_migration_tool-bin/lldap_migration_tool target/lldap_migration_tool && \
mv bin/x86_64-unknown-linux-musl-lldap_set_password-bin/lldap_set_password target/lldap_set_password && \
chmod +x target/lldap && \
chmod +x target/lldap_migration_tool && \
chmod +x target/lldap_set_password && \
ls -la target/ . && \
pwd \
; fi
RUN if [ "${TARGETPLATFORM}" = "linux/arm64" ]; then \
mv bin/aarch64-unknown-linux-musl-lldap-bin/lldap target/lldap && \
mv bin/aarch64-unknown-linux-musl-lldap_migration_tool-bin/lldap_migration_tool target/lldap_migration_tool && \
mv bin/aarch64-unknown-linux-musl-lldap_set_password-bin/lldap_set_password target/lldap_set_password && \
chmod +x target/lldap && \
chmod +x target/lldap_migration_tool && \
chmod +x target/lldap_set_password && \
ls -la target/ . && \
pwd \
; fi
RUN if [ "${TARGETPLATFORM}" = "linux/arm/v7" ]; then \
mv bin/armv7-unknown-linux-musleabihf-lldap-bin/lldap target/lldap && \
mv bin/armv7-unknown-linux-musleabihf-lldap_migration_tool-bin/lldap_migration_tool target/lldap_migration_tool && \
mv bin/armv7-unknown-linux-musleabihf-lldap_set_password-bin/lldap_set_password target/lldap_set_password && \
chmod +x target/lldap && \
chmod +x target/lldap_migration_tool && \
chmod +x target/lldap_set_password && \
ls -la target/ . && \
pwd \
; fi
# Web and App dir
COPY lldap_config.docker_template.toml /lldap/
COPY web/index_local.html web/index.html
RUN cp target/lldap /lldap/ && \
cp target/lldap_migration_tool /lldap/ && \
cp target/lldap_set_password /lldap/ && \
cp -R web/index.html \
web/pkg \
web/static \
/lldap/app/
WORKDIR /lldap
RUN set -x \
&& for file in $(cat /lldap/app/static/libraries.txt); do wget -P app/static "$file"; done \
&& for file in $(cat /lldap/app/static/fonts/fonts.txt); do wget -P app/static/fonts "$file"; done \
&& chmod a+r -R .
FROM alpine:3.16
WORKDIR /app
ENV UID=1000
ENV GID=1000
ENV USER=lldap
RUN apk add --no-cache tini ca-certificates bash tzdata && \
addgroup -g $GID $USER && \
adduser \
--disabled-password \
--gecos "" \
--home "$(pwd)" \
--ingroup "$USER" \
--no-create-home \
--uid "$UID" \
"$USER" && \
mkdir -p /data && \
chown $USER:$USER /data
COPY --from=lldap --chown=$USER:$USER /lldap /app
VOLUME ["/data"]
HEALTHCHECK CMD ["/app/lldap", "healthcheck", "--config-file", "/data/lldap_config.toml"]
WORKDIR /app
ENTRYPOINT ["tini", "--", "/docker-entrypoint.sh"]
CMD ["run", "--config-file", "/data/lldap_config.toml"]


@@ -0,0 +1,3 @@
FROM localhost:5000/lldap/lldap:alpine-base
COPY --chown=$USER:$USER docker-entrypoint-rootless.sh /docker-entrypoint.sh
USER $USER


@@ -1,79 +1,31 @@
FROM debian:bullseye AS lldap
ARG DEBIAN_FRONTEND=noninteractive
ARG TARGETPLATFORM
RUN apt update && apt install -y wget
WORKDIR /dim
COPY bin/ bin/
COPY web/ web/
RUN mkdir -p target/
RUN mkdir -p /lldap/app
RUN if [ "${TARGETPLATFORM}" = "linux/amd64" ]; then \
mv bin/x86_64-unknown-linux-musl-lldap-bin/lldap target/lldap && \
mv bin/x86_64-unknown-linux-musl-lldap_migration_tool-bin/lldap_migration_tool target/lldap_migration_tool && \
mv bin/x86_64-unknown-linux-musl-lldap_set_password-bin/lldap_set_password target/lldap_set_password && \
chmod +x target/lldap && \
chmod +x target/lldap_migration_tool && \
chmod +x target/lldap_set_password && \
ls -la target/ . && \
pwd \
; fi
RUN if [ "${TARGETPLATFORM}" = "linux/arm64" ]; then \
mv bin/aarch64-unknown-linux-musl-lldap-bin/lldap target/lldap && \
mv bin/aarch64-unknown-linux-musl-lldap_migration_tool-bin/lldap_migration_tool target/lldap_migration_tool && \
mv bin/aarch64-unknown-linux-musl-lldap_set_password-bin/lldap_set_password target/lldap_set_password && \
chmod +x target/lldap && \
chmod +x target/lldap_migration_tool && \
chmod +x target/lldap_set_password && \
ls -la target/ . && \
pwd \
; fi
RUN if [ "${TARGETPLATFORM}" = "linux/arm/v7" ]; then \
mv bin/armv7-unknown-linux-musleabihf-lldap-bin/lldap target/lldap && \
mv bin/armv7-unknown-linux-musleabihf-lldap_migration_tool-bin/lldap_migration_tool target/lldap_migration_tool && \
mv bin/armv7-unknown-linux-musleabihf-lldap_set_password-bin/lldap_set_password target/lldap_set_password && \
chmod +x target/lldap && \
chmod +x target/lldap_migration_tool && \
chmod +x target/lldap_set_password && \
ls -la target/ . && \
pwd \
; fi
# Web and App dir
COPY docker-entrypoint.sh /docker-entrypoint.sh
COPY lldap_config.docker_template.toml /lldap/
COPY web/index_local.html web/index.html
RUN cp target/lldap /lldap/ && \
cp target/lldap_migration_tool /lldap/ && \
cp target/lldap_set_password /lldap/ && \
cp -R web/index.html \
web/pkg \
web/static \
/lldap/app/
WORKDIR /lldap
RUN set -x \
&& for file in $(cat /lldap/app/static/libraries.txt); do wget -P app/static "$file"; done \
&& for file in $(cat /lldap/app/static/fonts/fonts.txt); do wget -P app/static/fonts "$file"; done \
&& chmod a+r -R .
FROM debian:bullseye-slim
ENV UID=1000
ENV GID=1000
ENV USER=lldap
RUN apt update && \
apt install -y --no-install-recommends tini openssl ca-certificates gosu tzdata && \
apt clean && \
rm -rf /var/lib/apt/lists/* && \
groupadd -g $GID $USER && useradd --system -m -g $USER --uid $UID $USER && \
mkdir -p /data && chown $USER:$USER /data
COPY --from=lldap --chown=$USER:$USER /lldap /app
COPY --from=lldap --chown=$USER:$USER /docker-entrypoint.sh /docker-entrypoint.sh
VOLUME ["/data"]
WORKDIR /app
ENTRYPOINT ["tini", "--", "/docker-entrypoint.sh"]
CMD ["run", "--config-file", "/data/lldap_config.toml"]
HEALTHCHECK CMD ["/app/lldap", "healthcheck", "--config-file", "/data/lldap_config.toml"]
FROM localhost:5000/lldap/lldap:debian-base
# Taken directly from https://github.com/tianon/gosu/blob/master/INSTALL.md
ENV GOSU_VERSION 1.17
RUN set -eux; \
# save list of currently installed packages for later so we can clean up
savedAptMark="$(apt-mark showmanual)"; \
apt-get update; \
apt-get install -y --no-install-recommends ca-certificates gnupg wget; \
rm -rf /var/lib/apt/lists/*; \
\
dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
\
# verify the signature
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
gpgconf --kill all; \
rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
\
# clean up fetch dependencies
apt-mark auto '.*' > /dev/null; \
[ -z "$savedAptMark" ] || apt-mark manual $savedAptMark; \
apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; \
\
chmod +x /usr/local/bin/gosu; \
# verify that the binary works
gosu --version; \
gosu nobody true
COPY --chown=$USER:$USER docker-entrypoint.sh /docker-entrypoint.sh


@@ -0,0 +1,79 @@
FROM debian:bullseye AS lldap
ARG DEBIAN_FRONTEND=noninteractive
ARG TARGETPLATFORM
RUN apt update && apt install -y wget
WORKDIR /dim
COPY bin/ bin/
COPY web/ web/
RUN mkdir -p target/
RUN mkdir -p /lldap/app
RUN if [ "${TARGETPLATFORM}" = "linux/amd64" ]; then \
mv bin/x86_64-unknown-linux-musl-lldap-bin/lldap target/lldap && \
mv bin/x86_64-unknown-linux-musl-lldap_migration_tool-bin/lldap_migration_tool target/lldap_migration_tool && \
mv bin/x86_64-unknown-linux-musl-lldap_set_password-bin/lldap_set_password target/lldap_set_password && \
chmod +x target/lldap && \
chmod +x target/lldap_migration_tool && \
chmod +x target/lldap_set_password && \
ls -la target/ . && \
pwd \
; fi
RUN if [ "${TARGETPLATFORM}" = "linux/arm64" ]; then \
mv bin/aarch64-unknown-linux-musl-lldap-bin/lldap target/lldap && \
mv bin/aarch64-unknown-linux-musl-lldap_migration_tool-bin/lldap_migration_tool target/lldap_migration_tool && \
mv bin/aarch64-unknown-linux-musl-lldap_set_password-bin/lldap_set_password target/lldap_set_password && \
chmod +x target/lldap && \
chmod +x target/lldap_migration_tool && \
chmod +x target/lldap_set_password && \
ls -la target/ . && \
pwd \
; fi
RUN if [ "${TARGETPLATFORM}" = "linux/arm/v7" ]; then \
mv bin/armv7-unknown-linux-musleabihf-lldap-bin/lldap target/lldap && \
mv bin/armv7-unknown-linux-musleabihf-lldap_migration_tool-bin/lldap_migration_tool target/lldap_migration_tool && \
mv bin/armv7-unknown-linux-musleabihf-lldap_set_password-bin/lldap_set_password target/lldap_set_password && \
chmod +x target/lldap && \
chmod +x target/lldap_migration_tool && \
chmod +x target/lldap_set_password && \
ls -la target/ . && \
pwd \
; fi
# Web and App dir
COPY docker-entrypoint.sh /docker-entrypoint.sh
COPY lldap_config.docker_template.toml /lldap/
COPY web/index_local.html web/index.html
RUN cp target/lldap /lldap/ && \
cp target/lldap_migration_tool /lldap/ && \
cp target/lldap_set_password /lldap/ && \
cp -R web/index.html \
web/pkg \
web/static \
/lldap/app/
WORKDIR /lldap
RUN set -x \
&& for file in $(cat /lldap/app/static/libraries.txt); do wget -P app/static "$file"; done \
&& for file in $(cat /lldap/app/static/fonts/fonts.txt); do wget -P app/static/fonts "$file"; done \
&& chmod a+r -R .
FROM debian:bullseye-slim
ENV UID=1000
ENV GID=1000
ENV USER=lldap
RUN apt update && \
apt install -y --no-install-recommends tini openssl ca-certificates tzdata && \
apt clean && \
rm -rf /var/lib/apt/lists/* && \
groupadd -g $GID $USER && useradd --system -m -g $USER --uid $UID $USER && \
mkdir -p /data && chown $USER:$USER /data
COPY --from=lldap --chown=$USER:$USER /lldap /app
COPY --from=lldap --chown=$USER:$USER /docker-entrypoint.sh /docker-entrypoint.sh
VOLUME ["/data"]
WORKDIR /app
ENTRYPOINT ["tini", "--", "/docker-entrypoint.sh"]
CMD ["run", "--config-file", "/data/lldap_config.toml"]
HEALTHCHECK CMD ["/app/lldap", "healthcheck", "--config-file", "/data/lldap_config.toml"]


@@ -0,0 +1,3 @@
FROM localhost:5000/lldap/lldap:debian-base
COPY --chown=$USER:$USER docker-entrypoint-rootless.sh /docker-entrypoint.sh
USER $USER


@@ -1,5 +1,5 @@
# Keep tracking base image
FROM rust:1.71-slim-bookworm
FROM rust:1.74-slim-bookworm
# Set needed env path
ENV PATH="/opt/armv7l-linux-musleabihf-cross/:/opt/armv7l-linux-musleabihf-cross/bin/:/opt/aarch64-linux-musl-cross/:/opt/aarch64-linux-musl-cross/bin/:/opt/x86_64-linux-musl-cross/:/opt/x86_64-linux-musl-cross/bin/:$PATH"


@@ -87,8 +87,8 @@ jobs:
image: lldap/rust-dev:latest
steps:
- name: Checkout repository
uses: actions/checkout@v4.0.0
- uses: actions/cache@v3
uses: actions/checkout@v4.1.1
- uses: actions/cache@v4
with:
path: |
/usr/local/cargo/bin
@@ -110,7 +110,7 @@ jobs:
- name: Check build path
run: ls -al app/
- name: Upload ui artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: ui
path: app/
@@ -132,8 +132,8 @@ jobs:
CARGO_HOME: ${GITHUB_WORKSPACE}/.cargo
steps:
- name: Checkout repository
uses: actions/checkout@v4.0.0
- uses: actions/cache@v3
uses: actions/checkout@v4.1.1
- uses: actions/cache@v4
with:
path: |
.cargo/bin
@@ -149,17 +149,17 @@ jobs:
- name: Check path
run: ls -al target/release
- name: Upload ${{ matrix.target}} lldap artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.target}}-lldap-bin
path: target/${{ matrix.target }}/release/lldap
- name: Upload ${{ matrix.target }} migration tool artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.target }}-lldap_migration_tool-bin
path: target/${{ matrix.target }}/release/lldap_migration_tool
- name: Upload ${{ matrix.target }} password tool artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.target }}-lldap_set_password-bin
path: target/${{ matrix.target }}/release/lldap_set_password
@@ -199,7 +199,7 @@ jobs:
steps:
- name: Download artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: x86_64-unknown-linux-musl-lldap-bin
path: bin/
@@ -294,18 +294,18 @@ jobs:
steps:
- name: Checkout scripts
uses: actions/checkout@v4.0.0
uses: actions/checkout@v4.1.1
with:
sparse-checkout: 'scripts'
- name: Download LLDAP artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: x86_64-unknown-linux-musl-lldap-bin
path: bin/
- name: Download LLDAP set password
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: x86_64-unknown-linux-musl-lldap_set_password-bin
path: bin/
@@ -347,7 +347,7 @@ jobs:
- name: Export and convert to Postgres
run: |
bash ./scripts/sqlite_dump_commands.sh | sqlite3 ./users.db > ./dump.sql
sed -i -r -e "s/X'([[:xdigit:]]+'[^'])/'\\\x\\1/g" -e ":a; s/(INSERT INTO user_attribute_schema\(.*\) VALUES\(.*),1([^']*\);)$/\1,true\2/; s/(INSERT INTO user_attribute_schema\(.*\) VALUES\(.*),0([^']*\);)$/\1,false\2/; ta" -e '1s/^/BEGIN;\n/' -e '$aCOMMIT;' ./dump.sql
sed -i -r -e "s/X'([[:xdigit:]]+'[^'])/'\\\x\\1/g" -e ":a; s/(INSERT INTO (user_attribute_schema|jwt_storage)\(.*\) VALUES\(.*),1([^']*\);)$/\1,true\3/; s/(INSERT INTO (user_attribute_schema|jwt_storage)\(.*\) VALUES\(.*),0([^']*\);)$/\1,false\3/; ta" -e '1s/^/BEGIN;\n/' -e '$aCOMMIT;' ./dump.sql
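The boolean-conversion part of the substitution above can be exercised on its own; a minimal sketch (the `jwt_storage` column names here are invented for illustration) showing how SQLite integer booleans in the trailing `VALUES` become Postgres `true`/`false`:

```shell
#!/bin/sh
# Demonstrate only the boolean-rewrite portion of the migration sed:
# a trailing ",1" / ",0" in targeted INSERT statements becomes true/false.
line="INSERT INTO jwt_storage(jwt_hash,blacklisted) VALUES(12345,1);"
printf '%s\n' "$line" | sed -r -e ":a; s/(INSERT INTO (user_attribute_schema|jwt_storage)\(.*\) VALUES\(.*),1([^']*\);)$/\1,true\3/; s/(INSERT INTO (user_attribute_schema|jwt_storage)\(.*\) VALUES\(.*),0([^']*\);)$/\1,false\3/; ta"
# → INSERT INTO jwt_storage(jwt_hash,blacklisted) VALUES(12345,true);
```

The `:a ... ta` loop re-runs the substitutions until no boolean is left to rewrite, so rows with several boolean columns are handled too (the hex-blob rewrite in the first `-e` expression is omitted here).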
- name: Create schema on postgres
run: |
@@ -434,6 +434,9 @@ jobs:
- name: Test Dummy User MySQL
run: ldapsearch -H ldap://localhost:3893 -LLL -D "uid=dummyuser,ou=people,dc=example,dc=com" -w 'dummypassword' -s "One" -b "ou=people,dc=example,dc=com"
########################################
#### BUILD BASE IMAGE ##################
########################################
build-docker-image:
needs: [build-ui, build-bin]
name: Build Docker image
@@ -443,7 +446,7 @@ jobs:
container: ["debian","alpine"]
include:
- container: alpine
platforms: linux/amd64,linux/arm64
platforms: linux/amd64,linux/arm64,linux/arm/v7
tags: |
type=ref,event=pr
type=semver,pattern=v{{version}}
@@ -456,6 +459,8 @@ jobs:
type=raw,value=stable,enable=${{ startsWith(github.ref, 'refs/tags/v') }}
type=raw,value=stable,enable=${{ startsWith(github.ref, 'refs/tags/v') }},suffix=
type=raw,value=latest,enable={{ is_default_branch }},suffix=
type=raw,value={{ date 'YYYY-MM-DD' }},enable={{ is_default_branch }}
type=raw,value={{ date 'YYYY-MM-DD' }},enable={{ is_default_branch }},suffix=
- container: debian
platforms: linux/amd64,linux/arm64,linux/arm/v7
tags: |
@@ -465,31 +470,69 @@ jobs:
type=semver,pattern=v{{major}}.{{minor}}
type=raw,value=latest,enable={{ is_default_branch }}
type=raw,value=stable,enable=${{ startsWith(github.ref, 'refs/tags/v') }}
type=raw,value={{ date 'YYYY-MM-DD' }},enable={{ is_default_branch }}
services:
registry:
image: registry:2
ports:
- 5000:5000
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v4.0.0
uses: actions/checkout@v4.1.1
- name: Download all artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
path: bin
- name: Download lldap ui artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: ui
path: web
- name: Setup QEMU
uses: docker/setup-qemu-action@v2
- uses: docker/setup-buildx-action@v2
uses: docker/setup-qemu-action@v3
- name: Setup buildx
uses: docker/setup-buildx-action@v3
with:
driver-opts: network=host
- name: Docker ${{ matrix.container }} meta
id: meta
uses: docker/metadata-action@v4
- name: Docker ${{ matrix.container }} Base meta
id: meta-base
uses: docker/metadata-action@v5
with:
# list of Docker images to use as base name for tags
images: |
localhost:5000/lldap/lldap
tags: ${{ matrix.container }}-base
- name: Build ${{ matrix.container }} Base Docker Image
uses: docker/build-push-action@v5
with:
context: .
# Push is normally skipped on PRs, but the base image must be pushed to the local registry here, or the following build steps will fail
#push: ${{ github.event_name != 'pull_request' }}
push: true
platforms: ${{ matrix.platforms }}
file: ./.github/workflows/Dockerfile.ci.${{ matrix.container }}-base
tags: |
${{ steps.meta-base.outputs.tags }}
labels: ${{ steps.meta-base.outputs.labels }}
cache-from: type=gha,mode=max
cache-to: type=gha,mode=max
#####################################
#### build variants docker image ####
#####################################
- name: Docker ${{ matrix.container }}-rootless meta
id: meta-rootless
uses: docker/metadata-action@v5
with:
# list of Docker images to use as base name for tags
images: |
@@ -504,12 +547,48 @@ jobs:
# latest-alpine
# stable
# stable-alpine
# YYYY-MM-DD
# YYYY-MM-DD-alpine
#################
# vX-debian
# vX.Y-debian
# vX.Y.Z-debian
# latest-debian
# stable-debian
# YYYY-MM-DD-debian
#################
# Check matrix for tag list definition
flavor: |
latest=false
suffix=-${{ matrix.container }}-rootless
tags: ${{ matrix.tags }}
- name: Docker ${{ matrix.container }} meta
id: meta-standard
uses: docker/metadata-action@v5
with:
# list of Docker images to use as base name for tags
images: |
nitnelave/lldap
lldap/lldap
ghcr.io/lldap/lldap
# Wanted Docker tags
# vX-alpine
# vX.Y-alpine
# vX.Y.Z-alpine
# latest
# latest-alpine
# stable
# stable-alpine
# YYYY-MM-DD
# YYYY-MM-DD-alpine
#################
# vX-debian
# vX.Y-debian
# vX.Y.Z-debian
# latest-debian
# stable-debian
# YYYY-MM-DD-debian
#################
# Check matrix for tag list definition
flavor: |
@@ -520,33 +599,43 @@ jobs:
# Docker login to nitnelave/lldap and lldap/lldap
- name: Login to Nitnelave/LLDAP Docker Hub
if: github.event_name != 'pull_request'
uses: docker/login-action@v2
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GitHub Container Registry
if: github.event_name != 'pull_request'
uses: docker/login-action@v2
uses: docker/login-action@v3
with:
registry: ghcr.io
username: nitnelave
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build ${{ matrix.container }}-rootless Docker Image
uses: docker/build-push-action@v5
with:
context: .
push: ${{ github.event_name != 'pull_request' }}
platforms: ${{ matrix.platforms }}
file: ./.github/workflows/Dockerfile.ci.${{ matrix.container }}-rootless
tags: |
${{ steps.meta-rootless.outputs.tags }}
labels: ${{ steps.meta-rootless.outputs.labels }}
cache-from: type=gha,mode=max
cache-to: type=gha,mode=max
########################################
#### docker image build ####
########################################
### This docker build must always run last, since the :latest tag is pushed multiple times. Add any future variant builds above this one.
- name: Build ${{ matrix.container }} Docker Image
uses: docker/build-push-action@v4
uses: docker/build-push-action@v5
with:
context: .
push: ${{ github.event_name != 'pull_request' }}
platforms: ${{ matrix.platforms }}
file: ./.github/workflows/Dockerfile.ci.${{ matrix.container }}
tags: |
${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
${{ steps.meta-standard.outputs.tags }}
labels: ${{ steps.meta-standard.outputs.labels }}
cache-from: type=gha,mode=max
cache-to: type=gha,mode=max
@@ -578,7 +667,7 @@ jobs:
contents: write
steps:
- name: Download all artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
path: bin/
- name: Check file
@@ -599,7 +688,7 @@ jobs:
chmod +x bin/*-lldap_set_password
- name: Download lldap ui artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: ui
path: web


@@ -33,7 +33,7 @@ jobs:
steps:
- name: Checkout sources
uses: actions/checkout@v4.0.0
uses: actions/checkout@v4.1.1
- uses: Swatinem/rust-cache@v2
- name: Build
run: cargo build --verbose --workspace
@@ -52,7 +52,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout sources
uses: actions/checkout@v4.0.0
uses: actions/checkout@v4.1.1
- uses: Swatinem/rust-cache@v2
@@ -69,7 +69,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout sources
uses: actions/checkout@v4.0.0
uses: actions/checkout@v4.1.1
- uses: Swatinem/rust-cache@v2
@@ -88,7 +88,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout sources
uses: actions/checkout@v4.0.0
uses: actions/checkout@v4.1.1
- name: Install Rust
run: rustup toolchain install nightly --component llvm-tools-preview && rustup component add llvm-tools-preview --toolchain stable-x86_64-unknown-linux-gnu

CONTRIBUTING.md Normal file

@@ -0,0 +1,97 @@
# How to contribute to LLDAP
## Did you find a bug?
- Make sure there isn't already an [issue](https://github.com/lldap/lldap/issues?q=is%3Aissue+is%3Aopen) for it.
- Check if the bug still happens with the `latest` docker image, or the `main` branch if you compile it yourself.
- [Create an issue](https://github.com/lldap/lldap/issues/new) on GitHub. What makes a great issue:
- A quick summary of the bug.
- Steps to reproduce.
- LLDAP _verbose_ logs when reproducing the bug. Verbose mode can be set through environment variables (`LLDAP_VERBOSE=true`) or in the config (`verbose = true`).
- What you expected to happen.
- What actually happened.
- Other notes (what you tried, why you think it's happening, ...).
## Are you requesting integration with a new service?
- Check if there is already an [example config](https://github.com/lldap/lldap/tree/main/example_configs) for it.
- Try to figure out the configuration values for the new service yourself.
- You can use other example configs for inspiration.
- If you're having trouble, you can ask on [Discord](https://discord.gg/h5PEdRMNyP).
- If you succeed, make sure to contribute an example configuration, or a configuration guide.
- If you hit a block because of an unimplemented feature, go to the next section.
## Are you asking for a new feature?
- Make sure there isn't already an [issue](https://github.com/lldap/lldap/issues?q=is%3Aissue+is%3Aopen) for it.
- [Create an issue](https://github.com/lldap/lldap/issues/new) on GitHub. What makes a great feature request:
- A quick summary of the feature.
- Motivation: what problem does the feature solve?
- Workarounds: what are the currently possible solutions to the problem, however bad?
## Do you want to work on a PR?
That's great! There are two main ways to contribute to the project: documentation and code.
### Documentation
The simplest way to contribute is to submit a configuration guide for a new
service: it can be an example configuration file, or a markdown guide
explaining the steps necessary to configure the service.
We also have some
[documentation](https://github.com/lldap/lldap/tree/main/docs) with more
advanced guides (scripting, migrations, ...) you can contribute to.
### Code
If you don't know what to start with, check out the
[good first issues](https://github.com/lldap/lldap/labels/good%20first%20issue).
Otherwise, if you want to fix a specific bug or implement a feature, make sure
to start by creating an issue for it (if it doesn't already exist). There, we
can discuss whether it would be likely to be accepted and consider design
issues. That will save you from going down a wrong path, creating an entire PR
before getting told that it doesn't align with the project or the design is
flawed!
Once we agree on what to do in the issue, you can start working on the PR. A good quality PR has:
- A description of the change.
- The format we use for both commit titles and PRs is:
`tag: Do the thing`
The tag can be: server, app, docker, example_configs, ... It's a broad category.
The rest of the title should be an imperative sentence (see for instance [Commit Message
Guidelines](https://gist.github.com/robertpainsi/b632364184e70900af4ab688decf6f53)).
- The PR should refer to the issue it's addressing (e.g. "Fix #123").
- Explain the _why_ of the change.
- But also the _how_.
- Highlight any potential flaw or limitation.
- The code change should be as small as possible while solving the problem.
- Don't try to code-golf to change fewer characters, but keep logically separate changes in
different PRs.
- Add tests if possible.
- The tests should highlight the original issue in case of a bug.
- Ideally, we can apply the tests without the rest of the change and they would fail. With the
change, they pass.
- In some areas, there is no test infrastructure in place (e.g. for frontend changes). In that
case, do some manual testing and include the results (logs for backend changes, screenshot of a
successful service integration, screenshot of the frontend change).
- For backend changes, the tests should cover a significant portion of the new code paths, or
everything if possible. You can also add more tests to cover existing code.
- Of course, make sure all the existing tests pass. This will be checked anyway in the GitHub CI.
### Workflow
We use [GitHub Flow](https://docs.github.com/en/get-started/quickstart/github-flow):
- Fork the repository.
- (Optional) Create a new branch, or just use `main` in your fork.
- Make your change.
- Create a PR.
- Address the comments by adding more commits to your branch (or to `main`).
- The PR gets merged (the commits get squashed to a single one).
- (Optional) You can delete your branch/fork.
## Reminder
We're all volunteers, so be kind to each other! And since we're doing this in our free time, some
things can take longer than expected.

Cargo.lock generated

@@ -1351,8 +1351,10 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4e56602b469b2201400dec66a66aec5a9b8761ee97cd1b8c96ab2483fcc16cc9"
dependencies = [
"atomic",
"parking_lot",
"pear",
"serde",
"tempfile",
"toml",
"uncased",
"version_check",
@@ -2364,9 +2366,9 @@ dependencies = [
[[package]]
name = "ldap3_proto"
version = "0.4.0"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "db993ebb4a1acda7ac25fa7e8609cff225a65f1f4a668e378eb252a1a6de433a"
checksum = "a29eca0a9fef365d6d376a1b262e269a17b1c8c6de2cee76618642cd3c923506"
dependencies = [
"base64 0.21.0",
"bytes",
@@ -2453,7 +2455,7 @@ checksum = "f051f77a7c8e6957c0696eac88f26b0117e54f52d3fc682ab19397a8812846a4"
[[package]]
name = "lldap"
version = "0.5.0"
version = "0.5.1-alpha"
dependencies = [
"actix",
"actix-files",
@@ -2473,6 +2475,7 @@ dependencies = [
"clap",
"cron",
"derive_builder",
"derive_more",
"figment",
"figment_file_provider_adapter",
"futures",
@@ -2528,7 +2531,7 @@ dependencies = [
[[package]]
name = "lldap_app"
version = "0.5.0"
version = "0.5.1-alpha"
dependencies = [
"anyhow",
"base64 0.13.1",
@@ -2569,6 +2572,7 @@ dependencies = [
"opaque-ke",
"rand 0.8.5",
"rust-argon2",
"sea-orm",
"serde",
"sha2 0.9.9",
"thiserror",
@@ -4893,9 +4897,9 @@ dependencies = [
[[package]]
name = "webpki"
version = "0.22.1"
version = "0.22.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f0e74f82d49d545ad128049b7e88f6576df2da6b02e9ce565c6f533be576957e"
checksum = "07ecc0cd7cac091bf682ec5efa18b1cff79d617b84181f38b3951dbe135f607f"
dependencies = [
"ring",
"untrusted",

README.md

@@ -5,9 +5,9 @@
</p>
<p align="center">
<a href="https://github.com/nitnelave/lldap/actions/workflows/rust.yml?query=branch%3Amain">
<a href="https://github.com/lldap/lldap/actions/workflows/rust.yml?query=branch%3Amain">
<img
src="https://github.com/nitnelave/lldap/actions/workflows/rust.yml/badge.svg"
src="https://github.com/lldap/lldap/actions/workflows/rust.yml/badge.svg"
alt="Build"/>
</a>
<a href="https://discord.gg/h5PEdRMNyP">
@@ -37,14 +37,18 @@
- [Installation](#installation)
- [With Docker](#with-docker)
- [With Kubernetes](#with-kubernetes)
- [From a package repository](#from-a-package-repository)
- [From source](#from-source)
- [Backend](#backend)
- [Frontend](#frontend)
- [Cross-compilation](#cross-compilation)
- [Usage](#usage)
- [Recommended architecture](#recommended-architecture)
- [Client configuration](#client-configuration)
- [Compatible services](#compatible-services)
- [General configuration guide](#general-configuration-guide)
- [Sample client configurations](#sample-client-configurations)
- [Incompatible services](#incompatible-services)
- [Migrating from SQLite](#migrating-from-sqlite)
- [Comparisons with other services](#comparisons-with-other-services)
- [vs OpenLDAP](#vs-openldap)
@@ -61,7 +65,7 @@ many backends, from KeyCloak to Authelia to Nextcloud and
[more](#compatible-services)!
<img
src="https://raw.githubusercontent.com/nitnelave/lldap/master/screenshot.png"
src="https://raw.githubusercontent.com/lldap/lldap/master/screenshot.png"
alt="Screenshot of the user list page"
width="50%"
align="right"
@@ -94,9 +98,10 @@ MySQL/MariaDB or PostgreSQL.
### With Docker
The image is available at `nitnelave/lldap`. You should persist the `/data`
folder, which contains your configuration, the database and the private key
file.
The image is available at `lldap/lldap`. You should persist the `/data`
folder, which contains your configuration and the SQLite database (you can
remove this step if you use a different DB and configure with environment
variables only).
Configure the server by copying the `lldap_config.docker_template.toml` to
`/data/lldap_config.toml` and updating the configuration values (especially the
@@ -104,10 +109,12 @@ Configure the server by copying the `lldap_config.docker_template.toml` to
Environment variables should be prefixed with `LLDAP_` to override the
configuration.
If the `lldap_config.toml` doesn't exist when starting up, LLDAP will use default one. The default admin password is `password`, you can change the password later using the web interface.
If the `lldap_config.toml` doesn't exist when starting up, LLDAP will use a
default one. The default admin password is `password`; you can change the
password later using the web interface.
Secrets can also be set through a file. The filename should be specified by the
variables `LLDAP_JWT_SECRET_FILE` or `LLDAP_LDAP_USER_PASS_FILE`, and the file
variables `LLDAP_JWT_SECRET_FILE` or `LLDAP_KEY_SEED_FILE`, and the file
contents are loaded into the respective configuration parameters. Note that
`_FILE` variables take precedence.
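For instance, a hedged docker-compose fragment (secret names and file paths here are illustrative) wiring file-based secrets:

```yaml
# Illustrative only: mount secret files and point the _FILE variables at them.
services:
  lldap:
    image: lldap/lldap:stable
    environment:
      - LLDAP_JWT_SECRET_FILE=/run/secrets/jwt_secret
      - LLDAP_KEY_SEED_FILE=/run/secrets/key_seed
    secrets:
      - jwt_secret
      - key_seed
secrets:
  jwt_secret:
    file: ./secrets/jwt_secret.txt
  key_seed:
    file: ./secrets/key_seed.txt
```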
@@ -117,6 +124,7 @@ Example for docker compose:
- `:latest` tag image contains recently pushed code or feature tests, in which some instability can be expected.
- If `UID` and `GID` are not defined, LLDAP will use the default `UID` and `GID` of `1000`.
- If no `TZ` is set, the default `UTC` timezone will be used.
- You can generate the secrets by running `./generate_secrets.sh`.
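A minimal sketch of what such a secret-generation helper could do (illustrative only; the repository's actual `generate_secrets.sh` may differ):

```shell
#!/bin/sh
# Illustrative only: generate random values suitable for LLDAP_JWT_SECRET
# and LLDAP_KEY_SEED by filtering /dev/urandom down to alphanumerics.
JWT_SECRET=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)
KEY_SEED=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)
echo "LLDAP_JWT_SECRET=$JWT_SECRET"
echo "LLDAP_KEY_SEED=$KEY_SEED"
```

The generated lines can be pasted into the `environment:` section of the compose file below, or written to files for the `_FILE` variants.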
```yaml
version: "3"
@@ -127,10 +135,10 @@ volumes:
services:
lldap:
image: nitnelave/lldap:stable
image: lldap/lldap:stable
ports:
# For LDAP
- "3890:3890"
# For LDAP, not recommended to expose, see Usage section.
#- "3890:3890"
# For LDAPS (LDAP Over SSL), enable port if LLDAP_LDAPS_OPTIONS__ENABLED set true, look env below
#- "6360:6360"
# For the web front-end
@@ -144,7 +152,7 @@ services:
- GID=####
- TZ=####/####
- LLDAP_JWT_SECRET=REPLACE_WITH_RANDOM
- LLDAP_LDAP_USER_PASS=REPLACE_WITH_PASSWORD
- LLDAP_KEY_SEED=REPLACE_WITH_RANDOM
- LLDAP_LDAP_BASE_DN=dc=example,dc=com
# If using LDAPS, set enabled true and configure cert and key path
# - LLDAP_LDAPS_OPTIONS__ENABLED=true
@@ -162,6 +170,44 @@ front-end.
See https://github.com/Evantage-WS/lldap-kubernetes for a LLDAP deployment for Kubernetes
You can bootstrap your lldap instance (users, groups)
using [bootstrap.sh](example_configs/bootstrap/bootstrap.md#kubernetes-job).
It can be run by Argo CD for managing users in git-opt way, or as a one-shot job.
### From a package repository
**Do not open issues in this repository for problems with third-party
pre-built packages. Report issues downstream.**
Depending on the distribution you use, it might be possible to install lldap
from a package repository, either officially supported by the distribution or
community-contributed.
#### Debian, CentOS, Fedora, OpenSUSE, Ubuntu
The package for these distributions can be found at [LLDAP OBS](https://software.opensuse.org//download.html?project=home%3AMasgalor%3ALLDAP&package=lldap).
- When using the distributed package, the default login is `admin/password`. You can change that from the web UI after starting the service.
#### Arch Linux
Arch Linux offers unofficial support through the [Arch User Repository
(AUR)](https://wiki.archlinux.org/title/Arch_User_Repository).
Available package descriptions in AUR are:
- [lldap](https://aur.archlinux.org/packages/lldap) - Builds the latest stable version.
- [lldap-bin](https://aur.archlinux.org/packages/lldap-bin) - Uses the latest
pre-compiled binaries from the [releases in this repository](https://github.com/lldap/lldap/releases).
This package is recommended if you want to run lldap on a system with
limited resources.
- [lldap-git](https://aur.archlinux.org/packages/lldap-git) - Builds the
latest main branch code.
The package descriptions can be used
[to create and install packages](https://wiki.archlinux.org/title/Arch_User_Repository#Getting_started).
Each package places lldap's configuration file at `/etc/lldap.toml` and offers
[systemd service](https://wiki.archlinux.org/title/systemd#Using_units)
`lldap.service` to (auto-)start and stop lldap.
### From source
#### Backend
@@ -183,15 +229,13 @@ just run `cargo run -- run` to run the server.
#### Frontend
To bring up the server, you'll need to compile the frontend. In addition to
`cargo`, you'll need:
- WASM-pack: `cargo install wasm-pack`
`cargo`, you'll need WASM-pack, which can be installed by running `cargo install wasm-pack`.
Then you can build the frontend files with
```shell
./app/build.sh
````
```
(you'll need to run this after every front-end change to update the WASM
package served).
@@ -226,6 +270,47 @@ You can then get the compiled server binary in
Raspberry Pi (or other target), with the folder structure maintained (`app`
files in an `app` folder next to the binary).
## Usage
The simplest way to use LLDAP is through the web front-end. There you can
create users, set passwords, add them to groups and so on. Users can also
connect to the web UI and change their information, or request a password reset
link (if you configured the SMTP client).
Creating and managing custom attributes is currently in Beta. It's not
supported in the Web UI. The recommended way is to use
[Zepmann/lldap-cli](https://github.com/Zepmann/lldap-cli), a
community-contributed CLI frontend.
LLDAP is also very scriptable, through its GraphQL API. See the
[Scripting](docs/scripting.md) docs for more info.
### Recommended architecture
If you are using containers, a sample architecture could look like this:
- A reverse proxy (e.g. nginx or Traefik)
- An authentication service (e.g. Authelia, Authentik or KeyCloak) connected to
LLDAP to provide authentication for non-authenticated services, or to provide
SSO with compatible ones.
- The LLDAP service, with the web port exposed to Traefik.
- The LDAP port doesn't need to be exposed, since only the other containers
will access it.
- You can also set up LDAPS if you want to expose the LDAP port to the
internet (not recommended) or for an extra layer of security in the
inter-container communication (though it's very much optional).
- The default LLDAP container starts up as root to fix up some files'
permissions before dropping privileges to the given user. However,
you can (should?) use the `*-rootless` version of the images to start
directly as that user, once you have the permissions right. Just don't
forget to change from the `UID/GID` env vars to the `uid` docker-compose
field.
- Any other service that needs to connect to LLDAP for authentication (e.g.
NextCloud) can be added to a shared network with LLDAP. The finest
granularity is a network for each pair of LLDAP-service, but there are often
coarser granularities that make sense (e.g. a network for the \*arr stack and
LLDAP).
## Client configuration
### Compatible services
@@ -265,6 +350,7 @@ folder for help with:
- [Airsonic Advanced](example_configs/airsonic-advanced.md)
- [Apache Guacamole](example_configs/apacheguacamole.md)
- [Apereo CAS Server](example_configs/apereo_cas_server.md)
- [Authelia](example_configs/authelia_config.yml)
- [Authentik](example_configs/authentik.md)
- [Bookstack](example_configs/bookstack.env.example)
@@ -277,12 +363,18 @@ folder for help with:
- [Emby](example_configs/emby.md)
- [Ergo IRCd](example_configs/ergo.md)
- [Gitea](example_configs/gitea.md)
- [GitLab](example_configs/gitlab.md)
- [Grafana](example_configs/grafana_ldap_config.toml)
- [Grocy](example_configs/grocy.md)
- [Hedgedoc](example_configs/hedgedoc.md)
- [Home Assistant](example_configs/home-assistant.md)
- [Jellyfin](example_configs/jellyfin.md)
- [Jenkins](example_configs/jenkins.md)
- [Jitsi Meet](example_configs/jitsi_meet.conf)
- [Kasm](example_configs/kasm.md)
- [KeyCloak](example_configs/keycloak.md)
- [LibreNMS](example_configs/librenms.md)
- [Mastodon](example_configs/mastodon.env.example)
- [Matrix](example_configs/matrix_synapse.yml)
- [Mealie](example_configs/mealie.md)
- [MinIO](example_configs/minio.md)
@@ -298,14 +390,37 @@ folder for help with:
- [Squid](example_configs/squid.md)
- [Syncthing](example_configs/syncthing.md)
- [TheLounge](example_configs/thelounge.md)
- [Traccar](example_configs/traccar.xml)
- [Vaultwarden](example_configs/vaultwarden.md)
- [WeKan](example_configs/wekan.md)
- [WG Portal](example_configs/wg_portal.env.example)
- [WikiJS](example_configs/wikijs.md)
- [XBackBone](example_configs/xbackbone_config.php)
- [Zendto](example_configs/zendto.md)
- [Zitadel](example_configs/zitadel.md)
- [Zulip](example_configs/zulip.md)
### Incompatible services
Though we try to be maximally compatible, not every feature is supported; LLDAP
is not a fully-featured LDAP server, intentionally so.
LDAP browsing tools are generally not supported, though they could be. If you
need to use one but it behaves weirdly, please file a bug.
Some services use features that are not implemented, or require specific
attributes. You can try to create those attributes (see custom attributes in
the [Usage](#usage) section).
Finally, some services require password hashes so they can validate the
user's password themselves without contacting LLDAP. This is not and will not
be supported: it's incompatible with our password hashing scheme (a
zero-knowledge proof). Furthermore, it's generally not recommended in terms of
security, since it multiplies the places from which a password hash could leak.
In that category, the most prominent is Synology. It is, to date, the only
service that seems definitely incompatible with LLDAP.
## Migrating from SQLite
If you started with an SQLite database and would like to migrate to


@@ -6,7 +6,7 @@ homepage = "https://github.com/lldap/lldap"
license = "GPL-3.0-only"
name = "lldap_app"
repository = "https://github.com/lldap/lldap"
version = "0.5.0"
version = "0.5.1-alpha"
include = ["src/**/*", "queries/**/*", "Cargo.toml", "../schema.graphql"]
[dependencies]


@@ -4,7 +4,8 @@
<head>
<meta charset="utf-8" />
<title>LLDAP Administration</title>
<script src="/static/main.js" type="module" defer></script>
<base href="/">
<script src="static/main.js" type="module" defer></script>
<link
href="https://cdn.jsdelivr.net/npm/bootstrap-dark-5@1.1.3/dist/css/bootstrap-nightshade.min.css"
rel="preload stylesheet"
@@ -33,7 +34,7 @@
href="https://fonts.googleapis.com/css2?family=Bebas+Neue&display=swap" />
<link
rel="stylesheet"
href="/static/style.css" />
href="static/style.css" />
<script>
function inDarkMode(){
return darkmode.inDarkMode;


@@ -268,7 +268,7 @@ impl App {
<header class="p-2 mb-3 border-bottom">
<div class="container">
<div class="d-flex flex-wrap align-items-center justify-content-center justify-content-lg-start">
<a href="/" class="d-flex align-items-center mt-2 mb-lg-0 me-md-5 text-decoration-none">
<a href={yew_router::utils::base_url().unwrap_or("/".to_string())} class="d-flex align-items-center mt-2 mb-lg-0 me-md-5 text-decoration-none">
<h2>{"LLDAP"}</h2>
</a>
@@ -355,7 +355,7 @@ impl App {
<span>{format!("LLDAP version {}", env!("CARGO_PKG_VERSION"))}</span>
</div>
<div>
<a href="https://github.com/nitnelave/lldap" class="me-4 text-reset">
<a href="https://github.com/lldap/lldap" class="me-4 text-reset">
<i class="bi-github"></i>
</a>
<a href="https://discord.gg/h5PEdRMNyP" class="me-4 text-reset">
@@ -366,7 +366,7 @@ impl App {
</a>
</div>
<div>
<span>{"License "}<a href="https://github.com/nitnelave/lldap/blob/main/LICENSE" class="link-secondary">{"GNU GPL"}</a></span>
<span>{"License "}<a href="https://github.com/lldap/lldap/blob/main/LICENSE" class="link-secondary">{"GNU GPL"}</a></span>
</div>
</footer>
}


@@ -97,7 +97,7 @@ impl CommonComponent<ChangePasswordForm> for ChangePasswordForm {
.context("Could not initialize login")?;
self.opaque_data = OpaqueData::Login(login_start_request.state);
let req = login::ClientLoginStartRequest {
username: ctx.props().username.clone(),
username: ctx.props().username.clone().into(),
login_start_request: login_start_request.message,
};
self.common.call_backend(
@@ -134,7 +134,7 @@ impl CommonComponent<ChangePasswordForm> for ChangePasswordForm {
)
.context("Could not initiate password change")?;
let req = registration::ClientRegistrationStartRequest {
username: ctx.props().username.clone(),
username: ctx.props().username.clone().into(),
registration_start_request: registration_start_request.message,
};
self.opaque_data = OpaqueData::Registration(registration_start_request.state);


@@ -90,6 +90,7 @@ impl CommonComponent<CreateUserForm> for CreateUserForm {
firstName: to_option(model.first_name),
lastName: to_option(model.last_name),
avatar: None,
attributes: None,
},
};
self.common.call_graphql::<CreateUser, _>(
@@ -122,7 +123,7 @@ impl CommonComponent<CreateUserForm> for CreateUserForm {
&mut rng,
)?;
let req = registration::ClientRegistrationStartRequest {
username: user_id,
username: user_id.into(),
registration_start_request: message,
};
self.common


@@ -66,7 +66,7 @@ impl CommonComponent<LoginForm> for LoginForm {
opaque::client::login::start_login(&password, &mut rng)
.context("Could not initialize login")?;
let req = login::ClientLoginStartRequest {
username,
username: username.into(),
login_start_request: message,
};
self.common


@@ -68,7 +68,7 @@ impl CommonComponent<ResetPasswordStep2Form> for ResetPasswordStep2Form {
opaque_registration::start_registration(new_password.as_bytes(), &mut rng)
.context("Could not initiate password change")?;
let req = registration::ClientRegistrationStartRequest {
username: self.username.clone().unwrap(),
username: self.username.as_ref().unwrap().into(),
registration_start_request: registration_start_request.message,
};
self.opaque_data = Some(registration_start_request.state);


@@ -23,10 +23,7 @@ struct JsFile {
impl ToString for JsFile {
fn to_string(&self) -> String {
self.file
.as_ref()
.map(File::name)
.unwrap_or_else(String::new)
self.file.as_ref().map(File::name).unwrap_or_default()
}
}
@@ -391,6 +388,8 @@ impl UserDetailsForm {
firstName: None,
lastName: None,
avatar: None,
removeAttributes: None,
insertAttributes: None,
};
let default_user_input = user_input.clone();
let model = self.form.model();


@@ -18,6 +18,10 @@ fn get_claims_from_jwt(jwt: &str) -> Result<JWTClaims> {
const NO_BODY: Option<()> = None;
fn base_url() -> String {
yew_router::utils::base_url().unwrap_or_default()
}
async fn call_server(
url: &str,
body: Option<impl Serialize>,
@@ -97,7 +101,7 @@ impl HostService {
};
let request_body = QueryType::build_query(variables);
call_server_json_with_error_message::<graphql_client::Response<_>, _>(
"/api/graphql",
&(base_url() + "/api/graphql"),
Some(request_body),
error_message,
)
@@ -109,7 +113,7 @@ impl HostService {
request: login::ClientLoginStartRequest,
) -> Result<Box<login::ServerLoginStartResponse>> {
call_server_json_with_error_message(
"/auth/opaque/login/start",
&(base_url() + "/auth/opaque/login/start"),
Some(request),
"Could not start authentication: ",
)
@@ -118,7 +122,7 @@ impl HostService {
pub async fn login_finish(request: login::ClientLoginFinishRequest) -> Result<(String, bool)> {
call_server_json_with_error_message::<login::ServerLoginResponse, _>(
"/auth/opaque/login/finish",
&(base_url() + "/auth/opaque/login/finish"),
Some(request),
"Could not finish authentication",
)
@@ -130,7 +134,7 @@ impl HostService {
request: registration::ClientRegistrationStartRequest,
) -> Result<Box<registration::ServerRegistrationStartResponse>> {
call_server_json_with_error_message(
"/auth/opaque/register/start",
&(base_url() + "/auth/opaque/register/start"),
Some(request),
"Could not start registration: ",
)
@@ -141,7 +145,7 @@ impl HostService {
request: registration::ClientRegistrationFinishRequest,
) -> Result<()> {
call_server_empty_response_with_error_message(
"/auth/opaque/register/finish",
&(base_url() + "/auth/opaque/register/finish"),
Some(request),
"Could not finish registration",
)
@@ -150,7 +154,7 @@ impl HostService {
pub async fn refresh() -> Result<(String, bool)> {
call_server_json_with_error_message::<login::ServerLoginResponse, _>(
"/auth/refresh",
&(base_url() + "/auth/refresh"),
NO_BODY,
"Could not start authentication: ",
)
@@ -160,13 +164,21 @@ impl HostService {
// The `_request` parameter is to make it the same shape as the other functions.
pub async fn logout() -> Result<()> {
call_server_empty_response_with_error_message("/auth/logout", NO_BODY, "Could not logout")
.await
call_server_empty_response_with_error_message(
&(base_url() + "/auth/logout"),
NO_BODY,
"Could not logout",
)
.await
}
pub async fn reset_password_step1(username: String) -> Result<()> {
call_server_empty_response_with_error_message(
&format!("/auth/reset/step1/{}", url_escape::encode_query(&username)),
&format!(
"{}/auth/reset/step1/{}",
base_url(),
url_escape::encode_query(&username)
),
NO_BODY,
"Could not initiate password reset",
)
@@ -177,7 +189,7 @@ impl HostService {
token: String,
) -> Result<lldap_auth::password_reset::ServerPasswordResetResponse> {
call_server_json_with_error_message(
&format!("/auth/reset/step2/{}", token),
&format!("{}/auth/reset/step2/{}", base_url(), token),
NO_BODY,
"Could not validate token",
)
@@ -185,13 +197,13 @@ impl HostService {
}
pub async fn probe_password_reset() -> Result<bool> {
Ok(
gloo_net::http::Request::get("/auth/reset/step1/lldap_unlikely_very_long_user_name")
.header("Content-Type", "application/json")
.send()
.await?
.status()
!= http::StatusCode::NOT_FOUND,
Ok(gloo_net::http::Request::get(
&(base_url() + "/auth/reset/step1/lldap_unlikely_very_long_user_name"),
)
.header("Content-Type", "application/json")
.send()
.await?
.status()
!= http::StatusCode::NOT_FOUND)
}
}


@@ -22,10 +22,11 @@ pub fn set_cookie(cookie_name: &str, value: &str, expiration: &DateTime<Utc>) ->
.map_err(|_| anyhow!("Document is not an HTMLDocument"))
})?;
let cookie_string = format!(
"{}={}; expires={}; sameSite=Strict; path=/",
"{}={}; expires={}; sameSite=Strict; path={}/",
cookie_name,
value,
expiration.to_rfc2822()
expiration.to_rfc2822(),
yew_router::utils::base_url().unwrap_or_default()
);
doc.set_cookie(&cookie_string)
.map_err(|_| anyhow!("Could not set cookie"))


@@ -13,12 +13,13 @@ default = ["opaque_server", "opaque_client"]
opaque_server = []
opaque_client = []
js = []
sea_orm = ["dep:sea-orm"]
[dependencies]
rust-argon2 = "0.8"
curve25519-dalek = "3"
digest = "0.9"
generic-array = "*"
generic-array = "0.14"
rand = "0.8"
serde = "*"
sha2 = "0.9"
@@ -31,6 +32,12 @@ version = "0.6"
version = "*"
features = [ "serde" ]
[dependencies.sea-orm]
version= "0.12"
default-features = false
features = ["macros"]
optional = true
# For WASM targets, use the JS getrandom.
[target.'cfg(not(target_arch = "wasm32"))'.dependencies.getrandom]
version = "0.2"


@@ -9,17 +9,17 @@ pub mod opaque;
/// The messages for the 3-step OPAQUE and simple login process.
pub mod login {
use super::*;
use super::{types::UserId, *};
#[derive(Serialize, Deserialize, Clone)]
pub struct ServerData {
pub username: String,
pub username: UserId,
pub server_login: opaque::server::login::ServerLogin,
}
#[derive(Serialize, Deserialize, Clone)]
pub struct ClientLoginStartRequest {
pub username: String,
pub username: UserId,
pub login_start_request: opaque::server::login::CredentialRequest,
}
@@ -39,14 +39,14 @@ pub mod login {
#[derive(Serialize, Deserialize, Clone)]
pub struct ClientSimpleLoginRequest {
pub username: String,
pub username: UserId,
pub password: String,
}
impl fmt::Debug for ClientSimpleLoginRequest {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("ClientSimpleLoginRequest")
.field("username", &self.username)
.field("username", &self.username.as_str())
.field("password", &"***********")
.finish()
}
@@ -63,16 +63,16 @@ pub mod login {
/// The messages for the 3-step OPAQUE registration process.
/// It is used to reset a user's password.
pub mod registration {
use super::*;
use super::{types::UserId, *};
#[derive(Serialize, Deserialize, Clone)]
pub struct ServerData {
pub username: String,
pub username: UserId,
}
#[derive(Serialize, Deserialize, Clone)]
pub struct ClientRegistrationStartRequest {
pub username: String,
pub username: UserId,
pub registration_start_request: opaque::server::registration::RegistrationRequest,
}
@@ -104,6 +104,100 @@ pub mod password_reset {
}
}
pub mod types {
use serde::{Deserialize, Serialize};
#[cfg(feature = "sea_orm")]
use sea_orm::{DbErr, DeriveValueType, QueryResult, TryFromU64, Value};
#[derive(
PartialEq, Eq, PartialOrd, Ord, Clone, Debug, Default, Hash, Serialize, Deserialize,
)]
#[cfg_attr(feature = "sea_orm", derive(DeriveValueType))]
#[serde(from = "String")]
pub struct CaseInsensitiveString(String);
impl CaseInsensitiveString {
pub fn new(s: &str) -> Self {
Self(s.to_ascii_lowercase())
}
pub fn as_str(&self) -> &str {
self.0.as_str()
}
pub fn into_string(self) -> String {
self.0
}
}
impl From<String> for CaseInsensitiveString {
fn from(mut s: String) -> Self {
s.make_ascii_lowercase();
Self(s)
}
}
impl From<&String> for CaseInsensitiveString {
fn from(s: &String) -> Self {
Self::new(s.as_str())
}
}
impl From<&str> for CaseInsensitiveString {
fn from(s: &str) -> Self {
Self::new(s)
}
}
#[derive(
PartialEq, Eq, PartialOrd, Ord, Clone, Debug, Default, Hash, Serialize, Deserialize,
)]
#[cfg_attr(feature = "sea_orm", derive(DeriveValueType))]
#[serde(from = "CaseInsensitiveString")]
pub struct UserId(CaseInsensitiveString);
impl UserId {
pub fn new(s: &str) -> Self {
s.into()
}
pub fn as_str(&self) -> &str {
self.0.as_str()
}
pub fn into_string(self) -> String {
self.0.into_string()
}
}
impl<T> From<T> for UserId
where
T: Into<CaseInsensitiveString>,
{
fn from(s: T) -> Self {
Self(s.into())
}
}
impl std::fmt::Display for UserId {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(f, "{}", self.0.as_str())
}
}
#[cfg(feature = "sea_orm")]
impl From<&UserId> for Value {
fn from(user_id: &UserId) -> Self {
user_id.as_str().into()
}
}
#[cfg(feature = "sea_orm")]
impl TryFromU64 for UserId {
fn try_from_u64(_n: u64) -> Result<Self, DbErr> {
Err(DbErr::ConvertFromU64(
"UserId cannot be constructed from u64",
))
}
}
}
#[derive(Clone, Serialize, Deserialize)]
pub struct JWTClaims {
pub exp: DateTime<Utc>,


@@ -1,3 +1,4 @@
use crate::types::UserId;
use opaque_ke::ciphersuite::CipherSuite;
use rand::{CryptoRng, RngCore};
@@ -145,12 +146,12 @@ pub mod server {
pub fn start_registration(
server_setup: &ServerSetup,
registration_request: RegistrationRequest,
username: &str,
username: &UserId,
) -> AuthenticationResult<ServerRegistrationStartResult> {
Ok(ServerRegistration::start(
server_setup,
registration_request,
username.as_bytes(),
username.as_str().as_bytes(),
)?)
}
@@ -178,14 +179,14 @@ pub mod server {
server_setup: &ServerSetup,
password_file: Option<ServerRegistration>,
credential_request: CredentialRequest,
username: &str,
username: &UserId,
) -> AuthenticationResult<ServerLoginStartResult> {
Ok(ServerLogin::start(
rng,
server_setup,
password_file,
credential_request,
username.as_bytes(),
username.as_str().as_bytes(),
ServerLoginStartParameters::default(),
)?)
}

docker-entrypoint-rootless.sh Executable file

@@ -0,0 +1,20 @@
#!/usr/bin/env bash
set -euo pipefail
CONFIG_FILE=/data/lldap_config.toml
if [ ! -f "$CONFIG_FILE" ]; then
echo "[entrypoint] Copying the default config to $CONFIG_FILE"
echo "[entrypoint] Edit this $CONFIG_FILE to configure LLDAP."
if cp /app/lldap_config.docker_template.toml $CONFIG_FILE; then
echo "Configuration copied successfully."
else
echo "Failed to copy the configuration; check the permissions on /data, or create the file manually by copying it from the LLDAP repository"
exit 1
fi
fi
echo "> Starting lldap..."
echo ""
exec /app/lldap "$@"
exec "$@"


@@ -51,8 +51,9 @@ format to PostgreSQL format, and wrap it all in a transaction:
```sh
sed -i -r -e "s/X'([[:xdigit:]]+'[^'])/'\\\x\\1/g" \
-e ":a; s/(INSERT INTO user_attribute_schema\(.*\) VALUES\(.*),1([^']*\);)$/\1,true\2/; s/(INSERT INTO user_attribute_schema\(.*\) VALUES\(.*),0([^']*\);)$/\1,false\2/; ta" \
-e ":a; s/(INSERT INTO (user_attribute_schema|jwt_storage)\(.*\) VALUES\(.*),1([^']*\);)$/\1,true\3/; s/(INSERT INTO (user_attribute_schema|jwt_storage)\(.*\) VALUES\(.*),0([^']*\);)$/\1,false\3/; ta" \
-e '1s/^/BEGIN;\n/' \
-e '$aSELECT setval(pg_get_serial_sequence('\''groups'\'', '\''group_id'\''), COALESCE((SELECT MAX(group_id) FROM groups), 1));' \
-e '$aCOMMIT;' /path/to/dump.sql
```
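To see what the boolean-rewriting part of this `sed` does, here is a minimal sketch of a single substitution pass on one sample line (the column names are illustrative): a trailing `,1` outside quotes becomes `,true`, matching how SQLite stores booleans as integers while PostgreSQL expects `true`/`false`.

```shell
# One pass of the SQLite -> PostgreSQL boolean rewrite from the command above.
line="INSERT INTO jwt_storage(jwt_hash,blacklisted) VALUES('abc',1);"
printf '%s\n' "$line" |
  sed -E "s/(INSERT INTO (user_attribute_schema|jwt_storage)\(.*\) VALUES\(.*),1([^']*\);)$/\1,true\3/"
# -> INSERT INTO jwt_storage(jwt_hash,blacklisted) VALUES('abc',true);
```

The `:a; ...; ta` loop in the full command simply repeats this substitution until no boolean is left on the line.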
@@ -107,4 +108,4 @@ Modify your `database_url` in `lldap_config.toml` (or `LLDAP_DATABASE_URL` in th
to point to your new database (the same value used when generating schema). Restart
LLDAP and check the logs to ensure there were no errors.
#### More details/examples can be seen in the CI process [here](https://raw.githubusercontent.com/nitnelave/lldap/main/.github/workflows/docker-build-static.yml), look for the job `lldap-database-migration-test`
#### More details/examples can be seen in the CI process [here](https://raw.githubusercontent.com/lldap/lldap/main/.github/workflows/docker-build-static.yml), look for the job `lldap-database-migration-test`


@@ -18,6 +18,15 @@ still supports basic RootDSE queries.
Anonymous bind is not supported.
## `lldap-cli`
There is a community-built CLI frontend,
[Zepmann/lldap-cli](https://github.com/Zepmann/lldap-cli), that supports all
available operations (as of this writing): getting information from the
server, creating users, adding them to groups, and creating new custom
attributes and populating them. It is currently the easiest way to script
interactions with LLDAP.
## GraphQL
The best way to interact with LLDAP programmatically is via the GraphQL


@@ -0,0 +1,18 @@
# Configuration for Apereo CAS Server
Replace `dc=example,dc=com` with your configured LLDAP domain, and `ldap.example.com` with your LLDAP server's hostname.
The `search-filter` provided here requires users to be members of the `cas_auth` group in LLDAP.
Add the following to your CAS configuration, e.g. in `/etc/cas/config/standalone.yml`:
```yaml
cas:
authn:
ldap:
- base-dn: dc=example,dc=com
bind-credential: password
bind-dn: uid=admin,ou=people,dc=example,dc=com
ldap-url: ldap://ldap.example.com:3890
search-filter: (&(objectClass=person)(memberOf=uid=cas_auth,ou=groups,dc=example,dc=com))
```


@@ -33,7 +33,7 @@ authentication_backend:
users_filter: "(&({username_attribute}={input})(objectClass=person))"
# Set this to ou=groups, because all groups are stored in this ou
additional_groups_dn: ou=groups
# Only this filter is supported right now
# The groups are not displayed in the UI, but this filter works.
groups_filter: "(member={dn})"
# The attribute holding the name of the group.
group_name_attribute: cn

Binary file not shown.



@@ -0,0 +1,254 @@
# Bootstrapping LLDAP using the [bootstrap.sh](bootstrap.sh) script
bootstrap.sh allows managing your LLDAP instance in a declarative, GitOps way using JSON config files.
The script can:
* create and update users
* set/update all LLDAP built-in user attributes
* add users to and remove them from the corresponding groups
* set/update a user's avatar from a file, a link, or from Gravatar (by the user's email)
* set/update user passwords
* create groups
* delete redundant users and groups (when the `DO_CLEANUP` env var is `true`)
* maintain the desired state described in the JSON config files
![](bootstrap-example-log-1.jpeg)
## Required packages
> The script automatically installs the required packages on Alpine and Debian-based distributions
> when run as root; otherwise, you can install them yourself.
- curl
- [jq](https://github.com/jqlang/jq)
- [jo](https://github.com/jpmens/jo)
## Environment variables
- `LLDAP_URL` or `LLDAP_URL_FILE` - the URL of your LLDAP instance, or the path to a file containing it (**MANDATORY**)
- `LLDAP_ADMIN_USERNAME` or `LLDAP_ADMIN_USERNAME_FILE` - the admin username, or the path to a file containing it (**MANDATORY**)
- `LLDAP_ADMIN_PASSWORD` or `LLDAP_ADMIN_PASSWORD_FILE` - the admin password, or the path to a file containing it (**MANDATORY**)
- `USER_CONFIGS_DIR` (default: `/user-configs`) - directory containing the user JSON configs
- `GROUP_CONFIGS_DIR` (default: `/group-configs`) - directory containing the group JSON configs
- `LLDAP_SET_PASSWORD_PATH` (default: `/app/lldap_set_password`) - path to the `lldap_set_password` utility
- `DO_CLEANUP` (default: `false`) - delete groups and users not specified in the config files, and remove users from groups they do not belong to
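The `*_FILE` variants are convenient for Docker or Kubernetes secrets. A minimal sketch of what the script's `check_required_env_vars` does when only the `_FILE` variable is set (the temp-file path here is illustrative):

```shell
# Emulate bootstrap.sh resolving LLDAP_ADMIN_PASSWORD from its _FILE variant.
secret_file="$(mktemp)"
printf 'changeme' > "$secret_file"
export LLDAP_ADMIN_PASSWORD_FILE="$secret_file"

# The script reads the file into the non-_FILE variable when the latter is unset:
LLDAP_ADMIN_PASSWORD="$(cat "$LLDAP_ADMIN_PASSWORD_FILE")"
echo "$LLDAP_ADMIN_PASSWORD"   # -> changeme
rm -f "$secret_file"
```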
## Config files
There are two types of config files: [group](#group-config-file-example) and [user](#user-config-file-example) configs.
Each config can be a single JSON file containing several top-level JSON values, or split across several JSON files.
### Group config file example
Group configs define the groups that the script will create.
Field descriptions:
* `name`: the name of the group (**MANDATORY**)
```json
{
"name": "group-1"
}
{
"name": "group-2"
}
```
### User config file example
A user config defines the full LLDAP user structure;
if a non-mandatory field is omitted, the script will clear that field in LLDAP as well.
Field descriptions:
* `id`: the username (**MANDATORY**)
* `email`: self-explanatory (**MANDATORY**)
* `password`: used to set the password via the `lldap_set_password` utility
* `displayName`: self-explanatory
* `firstName`: self-explanatory
* `lastName`: self-explanatory
* `avatar_file`: must be a valid path to a JPEG file (ignored if `avatar_url` is specified)
* `avatar_url`: must be a valid URL to a JPEG file (ignored if `gravatar_avatar` is specified)
* `gravatar_avatar` (default: `false`): the script will fetch the avatar from [gravatar](https://gravatar.com/) based on the previously specified `email` (takes the highest priority)
* `weserv_avatar` (default: `false`): the avatar from `avatar_url` or `gravatar_avatar` will be converted to JPEG using [wsrv.nl](https://wsrv.nl) (useful when your avatar is a PNG)
* `groups`: an array of the groups the user will be a member of (all groups must be defined in the group config files)
```json
{
"id": "username",
"email": "username@example.com",
"password": "changeme",
"displayName": "Display Name",
"firstName": "First",
"lastName": "Last",
"avatar_file": "/path/to/avatar.jpg",
"avatar_url": "https://i.imgur.com/nbCxk3z.jpg",
"gravatar_avatar": "false",
"weserv_avatar": "false",
"groups": [
"group-1",
"group-2"
]
}
```
## Usage example
### Manually
The script can be run manually in a terminal for the initial bootstrapping of your LLDAP instance.
Make sure the [required packages](#required-packages) are installed
and the [environment variables](#environment-variables) are configured properly.
```bash
export LLDAP_URL=http://localhost:8080
export LLDAP_ADMIN_USERNAME=admin
export LLDAP_ADMIN_PASSWORD=changeme
export USER_CONFIGS_DIR="$(realpath ./configs/user)"
export GROUP_CONFIGS_DIR="$(realpath ./configs/group)"
export LLDAP_SET_PASSWORD_PATH="$(realpath ./lldap_set_password)"
export DO_CLEANUP=false
./bootstrap.sh
```
### Docker compose
Suppose you have the following file structure:
```text
./
├─ docker-compose.yaml
└─ bootstrap
├─ bootstrap.sh
└─ user-configs
│ ├─ user-1.json
│ ├─ ...
│ └─ user-n.json
└─ group-configs
├─ group-1.json
├─ ...
└─ group-n.json
```
Mount the `bootstrap` directory into the LLDAP container and set the corresponding environment variables:
```yaml
version: "3"
services:
lldap:
image: lldap/lldap:v0.5.0
volumes:
- ./bootstrap:/bootstrap
ports:
- "3890:3890" # For LDAP
- "17170:17170" # For the web front-end
environment:
# envs required for lldap
- LLDAP_LDAP_USER_EMAIL=admin@example.com
- LLDAP_LDAP_USER_PASS=changeme
- LLDAP_LDAP_BASE_DN=dc=example,dc=com
# envs required for bootstrap.sh
- LLDAP_URL=http://localhost:17170
- LLDAP_ADMIN_USERNAME=admin
- LLDAP_ADMIN_PASSWORD=changeme # same as LLDAP_LDAP_USER_PASS
- USER_CONFIGS_DIR=/bootstrap/user-configs
- GROUP_CONFIGS_DIR=/bootstrap/group-configs
- DO_CLEANUP=false
```
Then, to bootstrap your LLDAP instance, run `docker compose exec lldap /bootstrap/bootstrap.sh`.
If the config files change, re-run the same command.
### Kubernetes job
```yaml
apiVersion: batch/v1
kind: Job
metadata:
name: lldap-bootstrap
# The following annotations are required if the job is managed by Argo CD,
# so Argo CD can relaunch the job on every app sync action
annotations:
argocd.argoproj.io/hook: PostSync
argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
spec:
template:
spec:
restartPolicy: OnFailure
containers:
- name: lldap-bootstrap
image: lldap/lldap:v0.5.0
command:
- /bootstrap/bootstrap.sh
env:
- name: LLDAP_URL
value: "http://lldap:8080"
- name: LLDAP_ADMIN_USERNAME
valueFrom: { secretKeyRef: { name: lldap-admin-user, key: username } }
- name: LLDAP_ADMIN_PASSWORD
valueFrom: { secretKeyRef: { name: lldap-admin-user, key: password } }
- name: DO_CLEANUP
value: "true"
volumeMounts:
- name: bootstrap
mountPath: /bootstrap/bootstrap.sh
subPath: bootstrap.sh
- name: user-configs
mountPath: /user-configs
readOnly: true
- name: group-configs
mountPath: /group-configs
readOnly: true
volumes:
- name: bootstrap
configMap:
name: bootstrap
defaultMode: 0555
items:
- key: bootstrap.sh
path: bootstrap.sh
- name: user-configs
projected:
sources:
- secret:
name: lldap-admin-user
items:
- key: user-config.json
path: admin-config.json
- secret:
name: lldap-password-manager-user
items:
- key: user-config.json
path: password-manager-config.json
- secret:
name: lldap-bootstrap-configs
items:
- key: user-configs.json
path: user-configs.json
- name: group-configs
projected:
sources:
- secret:
name: lldap-bootstrap-configs
items:
- key: group-configs.json
path: group-configs.json
```


@@ -0,0 +1,490 @@
#!/usr/bin/env bash
set -e
set -o pipefail
LLDAP_URL="${LLDAP_URL}"
LLDAP_ADMIN_USERNAME="${LLDAP_ADMIN_USERNAME}"
LLDAP_ADMIN_PASSWORD="${LLDAP_ADMIN_PASSWORD}"
USER_CONFIGS_DIR="${USER_CONFIGS_DIR:-/user-configs}"
GROUP_CONFIGS_DIR="${GROUP_CONFIGS_DIR:-/group-configs}"
LLDAP_SET_PASSWORD_PATH="${LLDAP_SET_PASSWORD_PATH:-/app/lldap_set_password}"
DO_CLEANUP="${DO_CLEANUP:-false}"
check_install_dependencies() {
local commands=('curl' 'jq' 'jo')
local commands_not_found='false'
if ! hash "${commands[@]}" 2>/dev/null; then
if hash 'apk' 2>/dev/null && [[ $EUID -eq 0 ]]; then
apk add "${commands[@]}"
elif hash 'apt' 2>/dev/null && [[ $EUID -eq 0 ]]; then
apt update -yqq
apt install -yqq "${commands[@]}"
else
local command=''
for command in "${commands[@]}"; do
if ! hash "$command" 2>/dev/null; then
printf 'Command not found "%s"\n' "$command"
fi
done
commands_not_found='true'
fi
fi
if [[ "$commands_not_found" == 'true' ]]; then
return 1
fi
}
check_required_env_vars() {
local env_var_not_specified='false'
local dual_env_vars_list=(
'LLDAP_URL'
'LLDAP_ADMIN_USERNAME'
'LLDAP_ADMIN_PASSWORD'
)
local dual_env_var_name=''
for dual_env_var_name in "${dual_env_vars_list[@]}"; do
local dual_env_var_file_name="${dual_env_var_name}_FILE"
if [[ -z "${!dual_env_var_name}" ]] && [[ -z "${!dual_env_var_file_name}" ]]; then
printf 'Please specify "%s" or "%s" variable!\n' "$dual_env_var_name" "$dual_env_var_file_name" >&2
env_var_not_specified='true'
else
if [[ -n "${!dual_env_var_file_name}" ]]; then
declare -g "$dual_env_var_name"="$(cat "${!dual_env_var_file_name}")"
fi
fi
done
if [[ "$env_var_not_specified" == 'true' ]]; then
return 1
fi
}
check_configs_validity() {
local config_file='' config_invalid='false'
for config_file in "$@"; do
local error=''
if ! error="$(jq '.' -- "$config_file" 2>&1 >/dev/null)"; then
printf '%s: %s\n' "$config_file" "$error"
config_invalid='true'
fi
done
if [[ "$config_invalid" == 'true' ]]; then
return 1
fi
}
auth() {
local url="$1" admin_username="$2" admin_password="$3"
local response
response="$(curl --silent --request POST \
--url "$url/auth/simple/login" \
--header 'Content-Type: application/json' \
--data "$(jo -- username="$admin_username" password="$admin_password")")"
TOKEN="$(printf '%s' "$response" | jq --raw-output .token)"
}
make_query() {
local query_file="$1" variables_file="$2"
curl --silent --request POST \
--url "$LLDAP_URL/api/graphql" \
--header "Authorization: Bearer $TOKEN" \
--header 'Content-Type: application/json' \
--data @<(jq --slurpfile variables "$variables_file" '. + {"variables": $variables[0]}' "$query_file")
}
get_group_list() {
local query='{"query":"query GetGroupList {groups {id displayName}}","operationName":"GetGroupList"}'
make_query <(printf '%s' "$query") <(printf '{}')
}
get_group_array() {
get_group_list | jq --raw-output '.data.groups[].displayName'
}
group_exists() {
if [[ "$(get_group_list | jq --raw-output --arg displayName "$1" '.data.groups | any(.[]; select(.displayName == $displayName))')" == 'true' ]]; then
return 0
else
return 1
fi
}
get_group_id() {
get_group_list | jq --raw-output --arg displayName "$1" '.data.groups[] | if .displayName == $displayName then .id else empty end'
}
create_group() {
local group_name="$1"
if group_exists "$group_name"; then
printf 'Group "%s" (%s) already exists\n' "$group_name" "$(get_group_id "$group_name")"
return
fi
# shellcheck disable=SC2016
local query='{"query":"mutation CreateGroup($name: String!) {createGroup(name: $name) {id displayName}}","operationName":"CreateGroup"}'
local response='' error=''
response="$(make_query <(printf '%s' "$query") <(jo -- name="$group_name"))"
error="$(printf '%s' "$response" | jq --raw-output '.errors | if . != null then .[].message else empty end')"
if [[ -n "$error" ]]; then
printf '%s\n' "$error"
else
printf 'Group "%s" (%s) successfully created\n' "$group_name" "$(printf '%s' "$response" | jq --raw-output '.data.createGroup.id')"
fi
}
delete_group() {
local group_name="$1" id=''
if ! group_exists "$group_name"; then
printf '[WARNING] Group "%s" does not exist\n' "$group_name"
return
fi
id="$(get_group_id "$group_name")"
# shellcheck disable=SC2016
local query='{"query":"mutation DeleteGroupQuery($groupId: Int!) {deleteGroup(groupId: $groupId) {ok}}","operationName":"DeleteGroupQuery"}'
local response='' error=''
response="$(make_query <(printf '%s' "$query") <(jo -- groupId="$id"))"
error="$(printf '%s' "$response" | jq --raw-output '.errors | if . != null then .[].message else empty end')"
if [[ -n "$error" ]]; then
printf '%s\n' "$error"
else
printf 'Group "%s" (%s) successfully deleted\n' "$group_name" "$id"
fi
}
get_user_details() {
local id="$1"
# shellcheck disable=SC2016
local query='{"query":"query GetUserDetails($id: String!) {user(userId: $id) {id email displayName firstName lastName creationDate uuid groups {id displayName}}}","operationName":"GetUserDetails"}'
make_query <(printf '%s' "$query") <(jo -- id="$id")
}
user_in_group() {
local user_id="$1" group_name="$2"
if ! group_exists "$group_name"; then
printf '[WARNING] Group "%s" does not exist\n' "$group_name"
return
fi
if ! user_exists "$user_id"; then
printf 'User "%s" does not exist\n' "$user_id"
return
fi
if [[ "$(get_user_details "$user_id" | jq --raw-output --arg displayName "$group_name" '.data.user.groups | any(.[]; select(.displayName == $displayName))')" == 'true' ]]; then
return 0
else
return 1
fi
}
add_user_to_group() {
local user_id="$1" group_name="$2" group_id=''
if ! group_exists "$group_name"; then
printf '[WARNING] Group "%s" does not exist\n' "$group_name"
return
fi
group_id="$(get_group_id "$group_name")"
if user_in_group "$user_id" "$group_name"; then
printf 'User "%s" already in group "%s" (%s)\n' "$user_id" "$group_name" "$group_id"
return
fi
# shellcheck disable=SC2016
local query='{"query":"mutation AddUserToGroup($user: String!, $group: Int!) {addUserToGroup(userId: $user, groupId: $group) {ok}}","operationName":"AddUserToGroup"}'
local response='' error=''
response="$(make_query <(printf '%s' "$query") <(jo -- user="$user_id" group="$group_id"))"
error="$(printf '%s' "$response" | jq '.errors | if . != null then .[].message else empty end')"
if [[ -n "$error" ]]; then
printf '%s\n' "$error"
else
printf 'User "%s" successfully added to the group "%s" (%s)\n' "$user_id" "$group_name" "$group_id"
fi
}
remove_user_from_group() {
local user_id="$1" group_name="$2" group_id=''
if ! group_exists "$group_name"; then
printf '[WARNING] Group "%s" does not exist\n' "$group_name"
return
fi
group_id="$(get_group_id "$group_name")"
# shellcheck disable=SC2016
local query='{"operationName":"RemoveUserFromGroup","query":"mutation RemoveUserFromGroup($user: String!, $group: Int!) {removeUserFromGroup(userId: $user, groupId: $group) {ok}}"}'
local response='' error=''
response="$(make_query <(printf '%s' "$query") <(jo -- user="$user_id" group="$group_id"))"
error="$(printf '%s' "$response" | jq '.errors | if . != null then .[].message else empty end')"
if [[ -n "$error" ]]; then
printf '%s\n' "$error"
else
printf 'User "%s" successfully removed from the group "%s" (%s)\n' "$user_id" "$group_name" "$group_id"
fi
}
get_users_list() {
# shellcheck disable=SC2016
local query='{"query": "query ListUsersQuery($filters: RequestFilter) {users(filters: $filters) {id email displayName firstName lastName creationDate}}","operationName": "ListUsersQuery"}'
make_query <(printf '%s' "$query") <(jo -- filters=null)
}
user_exists() {
if [[ "$(get_users_list | jq --raw-output --arg id "$1" '.data.users | any(.[]; contains({"id": $id}))')" == 'true' ]]; then
return 0
else
return 1
fi
}
delete_user() {
local id="$1"
if ! user_exists "$id"; then
printf 'User "%s" does not exist\n' "$id"
return
fi
# shellcheck disable=SC2016
local query='{"query": "mutation DeleteUserQuery($user: String!) {deleteUser(userId: $user) {ok}}","operationName": "DeleteUserQuery"}'
local response='' error=''
response="$(make_query <(printf '%s' "$query") <(jo -- user="$id"))"
error="$(printf '%s' "$response" | jq --raw-output '.errors | if . != null then .[].message else empty end')"
if [[ -n "$error" ]]; then
printf '%s\n' "$error"
else
printf 'User "%s" successfully deleted\n' "$id"
fi
}
__common_user_mutation_query() {
local \
query="$1" \
id="${2:-null}" \
email="${3:-null}" \
displayName="${4:-null}" \
firstName="${5:-null}" \
lastName="${6:-null}" \
avatar_file="${7:-null}" \
avatar_url="${8:-null}" \
gravatar_avatar="${9:-false}" \
weserv_avatar="${10:-false}"
local variables_arr=(
'-s' "id=$id"
'-s' "email=$email"
'-s' "displayName=$displayName"
'-s' "firstName=$firstName"
'-s' "lastName=$lastName"
)
local temp_avatar_file=''
if [[ "$gravatar_avatar" == 'true' ]]; then
avatar_url="https://gravatar.com/avatar/$(printf '%s' "$email" | sha256sum | cut -d ' ' -f 1)?size=512"
fi
if [[ "$avatar_url" != 'null' ]]; then
temp_avatar_file="${TMP_AVATAR_DIR}/$(printf '%s' "$avatar_url" | md5sum | cut -d ' ' -f 1)"
if ! [[ -f "$temp_avatar_file" ]]; then
if [[ "$weserv_avatar" == 'true' ]]; then
avatar_url="https://wsrv.nl/?url=$avatar_url&output=jpg"
fi
curl --silent --location --output "$temp_avatar_file" "$avatar_url"
fi
avatar_file="$temp_avatar_file"
fi
if [[ "$avatar_file" == 'null' ]]; then
variables_arr+=('-s' 'avatar=null')
else
variables_arr+=("avatar=%$avatar_file")
fi
make_query <(printf '%s' "$query") <(jo -- user=:<(jo -- "${variables_arr[@]}"))
}
create_user() {
local id="$1"
if user_exists "$id"; then
printf 'User "%s" already exists\n' "$id"
return
fi
# shellcheck disable=SC2016
local query='{"query":"mutation CreateUser($user: CreateUserInput!) {createUser(user: $user) {id creationDate}}","operationName":"CreateUser"}'
local response='' error=''
response="$(__common_user_mutation_query "$query" "$@")"
error="$(printf '%s' "$response" | jq --raw-output '.errors | if . != null then .[].message else empty end')"
if [[ -n "$error" ]]; then
printf '%s\n' "$error"
else
printf 'User "%s" successfully created\n' "$id"
fi
}
update_user() {
local id="$1"
if ! user_exists "$id"; then
printf 'User "%s" does not exist\n' "$id"
return
fi
# shellcheck disable=SC2016
local query='{"query":"mutation UpdateUser($user: UpdateUserInput!) {updateUser(user: $user) {ok}}","operationName":"UpdateUser"}'
local response='' error=''
response="$(__common_user_mutation_query "$query" "$@")"
error="$(printf '%s' "$response" | jq --raw-output '.errors | if . != null then .[].message else empty end')"
if [[ -n "$error" ]]; then
printf '%s\n' "$error"
else
printf 'User "%s" successfully updated\n' "$id"
fi
}
create_update_user() {
local id="$1"
if user_exists "$id"; then
update_user "$@"
else
create_user "$@"
fi
}
main() {
check_install_dependencies
check_required_env_vars
local user_config_files=("${USER_CONFIGS_DIR}"/*.json)
local group_config_files=("${GROUP_CONFIGS_DIR}"/*.json)
if ! check_configs_validity "${group_config_files[@]}" "${user_config_files[@]}"; then
exit 1
fi
until curl --silent -o /dev/null "$LLDAP_URL"; do
printf 'Waiting for lldap to start...\n'
sleep 10
done
auth "$LLDAP_URL" "$LLDAP_ADMIN_USERNAME" "$LLDAP_ADMIN_PASSWORD"
local redundant_groups=''
redundant_groups="$(get_group_list | jq '[ .data.groups[].displayName ]' | jq --compact-output '. - ["lldap_admin","lldap_password_manager","lldap_strict_readonly"]')"
printf -- '\n--- groups ---\n'
local group_config=''
while read -r group_config; do
local group_name=''
group_name="$(printf '%s' "$group_config" | jq --raw-output '.name')"
create_group "$group_name"
redundant_groups="$(printf '%s' "$redundant_groups" | jq --compact-output --arg name "$group_name" '. - [$name]')"
done < <(jq --compact-output '.' -- "${group_config_files[@]}")
printf -- '--- groups ---\n'
printf -- '\n--- redundant groups ---\n'
if [[ "$redundant_groups" == '[]' ]]; then
printf 'There are no redundant groups\n'
else
local group_name=''
while read -r group_name; do
if [[ "$DO_CLEANUP" == 'true' ]]; then
delete_group "$group_name"
else
printf '[WARNING] Group "%s" is not declared in config files\n' "$group_name"
fi
done < <(printf '%s' "$redundant_groups" | jq --raw-output '.[]')
fi
printf -- '--- redundant groups ---\n'
local redundant_users=''
redundant_users="$(get_users_list | jq '[ .data.users[].id ]' | jq --compact-output --arg admin_id "$LLDAP_ADMIN_USERNAME" '. - [$admin_id]')"
TMP_AVATAR_DIR="$(mktemp -d)"
local user_config=''
while read -r user_config; do
local field='' id='' email='' displayName='' firstName='' lastName='' avatar_file='' avatar_url='' gravatar_avatar='' weserv_avatar='' password=''
for field in 'id' 'email' 'displayName' 'firstName' 'lastName' 'avatar_file' 'avatar_url' 'gravatar_avatar' 'weserv_avatar' 'password'; do
declare "$field"="$(printf '%s' "$user_config" | jq --raw-output --arg field "$field" '.[$field]')"
done
printf -- '\n--- %s ---\n' "$id"
create_update_user "$id" "$email" "$displayName" "$firstName" "$lastName" "$avatar_file" "$avatar_url" "$gravatar_avatar" "$weserv_avatar"
redundant_users="$(printf '%s' "$redundant_users" | jq --compact-output --arg id "$id" '. - [$id]')"
if [[ "$password" != 'null' ]] && [[ "$password" != '""' ]]; then
"$LLDAP_SET_PASSWORD_PATH" --base-url "$LLDAP_URL" --token "$TOKEN" --username "$id" --password "$password"
fi
local redundant_user_groups=''
redundant_user_groups="$(get_user_details "$id" | jq '[ .data.user.groups[].displayName ]')"
local group=''
while read -r group; do
if [[ -n "$group" ]]; then
add_user_to_group "$id" "$group"
redundant_user_groups="$(printf '%s' "$redundant_user_groups" | jq --compact-output --arg group "$group" '. - [$group]')"
fi
done < <(printf '%s' "$user_config" | jq --raw-output '.groups | if . == null then "" else .[] end')
local user_group_name=''
while read -r user_group_name; do
if [[ "$DO_CLEANUP" == 'true' ]]; then
remove_user_from_group "$id" "$user_group_name"
else
printf '[WARNING] User "%s" is not declared as member of the "%s" group in the config files\n' "$id" "$user_group_name"
fi
done < <(printf '%s' "$redundant_user_groups" | jq --raw-output '.[]')
printf -- '--- %s ---\n' "$id"
done < <(jq --compact-output '.' -- "${user_config_files[@]}")
rm -r "$TMP_AVATAR_DIR"
printf -- '\n--- redundant users ---\n'
if [[ "$redundant_users" == '[]' ]]; then
printf 'There are no redundant users\n'
else
local id=''
while read -r id; do
if [[ "$DO_CLEANUP" == 'true' ]]; then
delete_user "$id"
else
printf '[WARNING] User "%s" is not declared in config files\n' "$id"
fi
done < <(printf '%s' "$redundant_users" | jq --raw-output '.[]')
fi
printf -- '--- redundant users ---\n'
}
main "$@"
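For reference, the field loop in `main` implies that each user config file is a JSON object shaped roughly like this (all values hypothetical; `groups` is the optional list consumed by the group-membership loop):

```json
{
  "id": "jdoe",
  "email": "jdoe@example.com",
  "displayName": "John Doe",
  "firstName": "John",
  "lastName": "Doe",
  "avatar_file": null,
  "avatar_url": null,
  "gravatar_avatar": "false",
  "weserv_avatar": "false",
  "password": "changeme",
  "groups": ["lldap_password_manager"]
}
```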


@@ -6,6 +6,7 @@ LDAP configuration is in ```/dokuwiki/conf/local.protected.php```:
<?php
$conf['useacl'] = 1; //enable ACL
$conf['authtype'] = 'authldap'; //enable this Auth plugin
$conf['superuser'] = 'admin';
$conf['plugin']['authldap']['server'] = 'ldap://lldap_server:3890'; #IP of your lldap
$conf['plugin']['authldap']['usertree'] = 'ou=people,dc=example,dc=com';
$conf['plugin']['authldap']['grouptree'] = 'ou=groups, dc=example, dc=com';

example_configs/gitlab.md

@@ -0,0 +1,30 @@
# GitLab Configuration
Members of the group ``git_user`` will have access to GitLab.
Edit ``/etc/gitlab/gitlab.rb``:
```ruby
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = {
'main' => {
'label' => 'LDAP',
'host' => 'ldap.example.com',
'port' => 3890,
'uid' => 'uid',
'base' => 'ou=people,dc=example,dc=com',
'encryption' => 'plain',
'bind_dn' => 'uid=bind_user,ou=people,dc=example,dc=com',
'password' => '<bind user password>',
'active_directory' => false,
'user_filter' => '(&(objectclass=person)(memberof=cn=git_user,ou=groups,dc=example,dc=com))',
'attributes' => {
'username' => 'uid',
'email' => 'mail',
'name' => 'displayName',
'first_name' => 'givenName',
'last_name' => 'sn'
}
}
}
```

example_configs/grocy.md

@@ -0,0 +1,28 @@
# Configuration for Grocy
Adjust the following values in the file `config/data/config.php` or add environment variables for them (prefixed with `GROCY_`).
NOTE: If the environment variables are not working (for example in the linuxserver.io Docker image), you need to add `clear_env = no` under the `[www]` section in `/config/php/www2.conf`.
Replace `dc=example,dc=com` with your LLDAP configured domain.
### AUTH_CLASS
Needs to be set to `Grocy\Middleware\LdapAuthMiddleware` in order to use LDAP
### LDAP_ADDRESS
The address of your LDAP server, e.g. `ldap://lldap.example.com:389`
### LDAP_BASE_DN
The base DN; usually points directly to `people`, e.g. `ou=people,dc=example,dc=com`
### LDAP_BIND_DN
The reader user for LLDAP, e.g. `uid=ldap-reader,ou=people,dc=example,dc=com`
### LDAP_BIND_PW
The password for the reader user
### LDAP_USER_FILTER
The filter to use for the users, e.g. for a separate group: `(&(objectClass=person)(memberof=cn=grocy_users,ou=groups,dc=example,dc=com))`
### LDAP_UID_ATTR
The user id attribute, should be `uid`
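Putting the values above together as `GROCY_`-prefixed environment variables (host name, bind user, and password are hypothetical examples; adjust to your setup):

```shell
# Hypothetical Grocy LDAP settings expressed as GROCY_-prefixed environment variables.
export GROCY_AUTH_CLASS='Grocy\Middleware\LdapAuthMiddleware'
export GROCY_LDAP_ADDRESS='ldap://lldap.example.com:389'
export GROCY_LDAP_BASE_DN='ou=people,dc=example,dc=com'
export GROCY_LDAP_BIND_DN='uid=ldap-reader,ou=people,dc=example,dc=com'
export GROCY_LDAP_BIND_PW='changeme'
export GROCY_LDAP_USER_FILTER='(&(objectClass=person)(memberof=cn=grocy_users,ou=groups,dc=example,dc=com))'
export GROCY_LDAP_UID_ATTR='uid'
```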


@@ -16,9 +16,20 @@ homeassistant:
- type: homeassistant
- type: command_line
command: /config/lldap-ha-auth.sh
# Only allow users in the 'homeassistant_user' group to login.
# Change to ["https://lldap.example.com"] to allow all users
args: ["https://lldap.example.com", "homeassistant_user"]
# arguments: [<LDAP Host>, <regular user group>, <admin user group>, <local user group>]
# <regular user group>: Find users that has permission to access homeassistant, anyone inside
# this group will have the default 'system-users' permission in homeassistant.
#
# <admin user group>: Allow users in the <regular user group> to be assigned into 'system-admin' group.
# Anyone inside this group will not have the 'system-users' permission as only one permission group
# is allowed in homeassistant
#
# <local user group>: Users in the <local user group> (e.g., 'homeassistant_local') can only access
# homeassistant inside LAN network.
#
# Only the first argument is required. ["https://lldap.example.com"] allows all users to log in from
# anywhere and have 'system-users' permissions.
args: ["https://lldap.example.com", "homeassistant_user", "homeassistant_admin", "homeassistant_local"]
meta: true
```
3. Reload your config or restart Home Assistant


@@ -0,0 +1,81 @@
# Configuration for Jenkins
## Jenkins base setup
To setup LLDAP for Jenkins navigate to Dashboard/Manage Jenkins/Security.
*Note: The Jenkins LDAP plugin has to be installed!*
*Note: "dc=example,dc=com" is the default configuration; you should replace it with your base DN.*
1) Set **Security Realm** to **LDAP**
2) Click Add Server
3) Setup config fields as stated below
## Config fields
#### Server
*(This can be replaced by server ip/your domain etc.)*
```
ldap://example.com:3890
```
### Advanced Server Configuration Dropdown
#### root DN
```
dc=example,dc=com
```
#### Allow blank rootDN
```
true
```
#### User search base
```
ou=people
```
#### User search filter
```
uid={0}
```
#### Group search base
```
ou=groups
```
#### Group search filter
```
(& (cn={0})(objectclass=groupOfNames))
```
#### Group membership
Select "Search for LDAP groups containing user" and leave the group membership filter empty
#### Manager DN
Enter your admin account here
```
cn=admin,ou=people,dc=example,dc=com
```
#### Manager Password
The password for the Manager DN account
#### Display Name LDAP attribute
Leave `cn`, as it maps to the username
```
cn
```
#### Email Address LDAP attribute
```
mail
```
### Tips & Tricks
- Always use "Test LDAP settings" so you won't get locked out. It works without a password.
- If you want to set up permissions, go to the Authorization settings and select Matrix-based security. Add a group/user (it has to exist in LLDAP) and grant it permissions. Note that without Overall Read, users cannot read Jenkins or execute actions; Administer grants full rights.
### Useful links:
https://plugins.jenkins.io/ldap/
https://www.jenkins.io/doc/book/security/managing-security/

example_configs/kasm.md

@@ -0,0 +1,19 @@
# Configuration for Kasm
In Kasm, go to *Admin* -> *Authentication* -> *LDAP* and add a configuration.
- *Name*: whatever you like
- *Url* is your lldap host (or IP) and port, e.g. `ldap://lldap.example.com:3890`
- *Search Base* is your base DN, e.g. `dc=example,dc=com`
- *Search Filter* is `(&(objectClass=person)(uid={0})(memberof=cn=kasm,ou=groups,dc=example,dc=com))`. Replace `cn=kasm,ou=groups,dc=example,dc=com` with the dn to the group necessary to login to Kasm.
- *Group Membership Filter* `(&(objectClass=groupOfUniqueNames)(member={0}))`
- *Email attribute* `mail`
- *Service Account DN*: an LLDAP user, preferably not an admin but a member of the group `lldap_strict_readonly`. Mine is called `cn=query,ou=people,dc=example,dc=com`
- *Service Account Password*: the query user's password
- Activate *Search Subtree*, *Auto Create App User* and *Enabled*
- under *Attribute Mapping* you can map the following:
- *Email* -> `mail`
- *First Name* -> `givenname`
- *Last Name* -> `sn`
- If you want to map groups from your lldap to Kasm, edit the group, scroll to *SSO Group Mappings* and add a new SSO mapping:
- select your lldap as provider
- *Group Attributes* is the full DN of your group, e.g. `cn=kasm_moreaccess,ou=groups,dc=example,dc=com`


@@ -66,5 +66,26 @@ fi
DISPLAY_NAME=$(jq -r .displayName <<< "$USER_JSON")
IS_ADMIN=false
if [[ ! -z "$3" ]] && jq -e '.groups|map(.displayName)|index("'"$3"'")' <<< "$USER_JSON" > /dev/null 2>&1; then
IS_ADMIN=true
fi
IS_LOCAL=false
if [[ ! -z "$4" ]] && jq -e '.groups|map(.displayName)|index("'"$4"'")' <<< "$USER_JSON" > /dev/null 2>&1; then
IS_LOCAL=true
fi
[[ ! -z "$DISPLAY_NAME" ]] && echo "name = $DISPLAY_NAME"
if [[ "$IS_ADMIN" = true ]]; then
echo "group = system-admin"
else
echo "group = system-users"
fi
if [[ "$IS_LOCAL" = true ]]; then
echo "local_only = true"
else
echo "local_only = false"
fi


@@ -1,6 +1,6 @@
[Unit]
Description=Nitnelave LLDAP
Documentation=https://github.com/lldap/lldap
# Only sqlite
After=network.target


@@ -0,0 +1,96 @@
# Mailserver Docker
[Docker-mailserver](https://docker-mailserver.github.io/docker-mailserver/latest/) is a production-ready, full-stack but simple mail server (SMTP, IMAP, LDAP, Antispam, Antivirus, etc.) running inside a container.
To integrate with LLDAP, ensure you correctly adjust the `docker-mailserver` container environment values.
## Compose File Sample
```yaml
version: "3.9"
services:
lldap:
image: lldap/lldap:stable
ports:
- "3890:3890"
- "17170:17170"
volumes:
- "lldap_data:/data"
environment:
- VERBOSE=true
- TZ=Etc/UTC
- LLDAP_JWT_SECRET=yourjwt
- LLDAP_LDAP_USER_PASS=adminpassword
- LLDAP_LDAP_BASE_DN=dc=example,dc=com
mailserver:
image: ghcr.io/docker-mailserver/docker-mailserver:latest
container_name: mailserver
hostname: mail.example.com
ports:
- "25:25" # SMTP (explicit TLS => STARTTLS)
- "143:143" # IMAP4 (explicit TLS => STARTTLS)
- "465:465" # ESMTP (implicit TLS)
- "587:587" # ESMTP (explicit TLS => STARTTLS)
- "993:993" # IMAP4 (implicit TLS)
volumes:
- mailserver-data:/var/mail
- mailserver-state:/var/mail-state
- mailserver-config:/tmp/docker-mailserver/
- /etc/localtime:/etc/localtime:ro
restart: always
stop_grace_period: 1m
healthcheck:
test: "ss --listening --tcp | grep -P 'LISTEN.+:smtp' || exit 1"
timeout: 3s
retries: 0
environment:
- LOG_LEVEL=debug
- SUPERVISOR_LOGLEVEL=debug
- SPAMASSASSIN_SPAM_TO_INBOX=1
- ENABLE_FAIL2BAN=0
- ENABLE_AMAVIS=0
- SPOOF_PROTECTION=1
- ENABLE_OPENDKIM=0
- ENABLE_OPENDMARC=0
# >>> Postfix LDAP Integration
- ACCOUNT_PROVISIONER=LDAP
- LDAP_SERVER_HOST=lldap:3890
- LDAP_SEARCH_BASE=dc=example,dc=com
- LDAP_BIND_DN=uid=admin,ou=people,dc=example,dc=com
- LDAP_BIND_PW=adminpassword
- LDAP_QUERY_FILTER_USER=(&(objectClass=inetOrgPerson)(|(uid=%u)(mail=%u)))
- LDAP_QUERY_FILTER_GROUP=(&(objectClass=groupOfUniqueNames)(uid=%s))
- LDAP_QUERY_FILTER_ALIAS=(&(objectClass=inetOrgPerson)(|(uid=%u)(mail=%u)))
- LDAP_QUERY_FILTER_DOMAIN=((mail=*@%s))
# <<< Postfix LDAP Integration
# >>> Dovecot LDAP Integration
- DOVECOT_AUTH_BIND=yes
- DOVECOT_USER_FILTER=(&(objectClass=inetOrgPerson)(|(uid=%u)(mail=%u)))
- DOVECOT_USER_ATTRS==uid=5000,=gid=5000,=home=/var/mail/%Ln,=mail=maildir:~/Maildir
      - POSTMASTER_ADDRESS=postmaster@example.com
cap_add:
- SYS_PTRACE
- NET_ADMIN # For Fail2Ban to work
roundcubemail:
image: roundcube/roundcubemail:latest
container_name: roundcubemail
restart: always
volumes:
- roundcube_data:/var/www/html
ports:
- "9002:80"
environment:
- ROUNDCUBEMAIL_DB_TYPE=sqlite
- ROUNDCUBEMAIL_SKIN=elastic
- ROUNDCUBEMAIL_DEFAULT_HOST=mailserver # IMAP
- ROUNDCUBEMAIL_SMTP_SERVER=mailserver # SMTP
volumes:
mailserver-data:
mailserver-config:
mailserver-state:
lldap_data:
roundcube_data:
```


@@ -0,0 +1,15 @@
## Add the following after the existing values in the .env file.
## This example uses the unsecured 3890 port. For LDAPS, set LDAP_METHOD=simple_tls and LDAP_PORT=6360
## For more details, see https://github.com/joylarkin/mastodon-documentation/blob/master/Running-Mastodon/Enabling-LDAP-login.md
LDAP_ENABLED=true
LDAP_METHOD=plain
LDAP_HOST=lldap
LDAP_PORT=3890
LDAP_BASE=dc=domain,dc=com
LDAP_BIND_DN=uid=admin,ou=people,dc=domain,dc=com
LDAP_PASSWORD=<lldap_admin_password_here>
LDAP_UID=uid
LDAP_MAIL=mail
LDAP_UID_CONVERSION_ENABLED=true
# match username or mail to authenticate, and only allow users belonging to group 'mastodon'
LDAP_SEARCH_FILTER=(&(memberof=cn=mastodon,ou=groups,dc=domain,dc=com)(|(%{uid}=%{email})(%{mail}=%{email})))


@@ -2,7 +2,7 @@
If you're here, there are some assumptions being made about access and capabilities you have on your system:
1. You have Authelia up and running, understand its functionality, and have read through the documentation.
2. You have [LLDAP](https://github.com/lldap/lldap) up and running.
3. You have Nextcloud and LLDAP communicating and without any config errors. See the [example config for Nextcloud](nextcloud.md)
## Authelia
@@ -87,4 +87,4 @@ If this is set to *true* then the user flow will _skip_ the login page and autom
### Conclusion
And that's it! Assuming all the settings that worked for me, work for you, you should be able to login using OpenID Connect via Authelia. If you find any errors, it's a good idea to keep a document of all your settings from Authelia/Nextcloud/LLDAP etc so that you can easily reference and ensure everything lines up.
If you have any issues, please create a [discussion](https://github.com/lldap/lldap/discussions) or join the [Discord](https://discord.gg/h5PEdRMNyP).


@@ -1,7 +1,38 @@
# Configuration for Seafile
Seafile can be bridged to LLDAP directly, or by using Authelia as an intermediary. This document will guide you through both setups.
## Configuring Seafile v11.0+ to use LLDAP directly
Starting with Seafile v11.0:
- The CCNET module no longer exists.
- More flexibility is given to authentication in Seafile: the ID binding can now be different from the user email, so the LLDAP UID can be used.
Add the following to your `seafile/conf/seahub_settings.py` :
```
ENABLE_LDAP = True
LDAP_SERVER_URL = 'ldap://192.168.1.100:3890'
LDAP_BASE_DN = 'ou=people,dc=example,dc=com'
LDAP_ADMIN_DN = 'uid=admin,ou=people,dc=example,dc=com'
LDAP_ADMIN_PASSWORD = 'CHANGE_ME'
LDAP_PROVIDER = 'ldap'
LDAP_LOGIN_ATTR = 'uid'
LDAP_CONTACT_EMAIL_ATTR = 'mail'
LDAP_USER_ROLE_ATTR = ''
LDAP_USER_FIRST_NAME_ATTR = 'givenName'
LDAP_USER_LAST_NAME_ATTR = 'sn'
LDAP_USER_NAME_REVERSE = False
```
* Replace `192.168.1.100:3890` with your LLDAP server's ip/hostname and port.
* Replace every instance of `dc=example,dc=com` with your configured domain.
After restarting the Seafile server, users should be able to log in with their UID and password.
Note: there is currently no LDAP binding for users' avatars. If interested, do [mention it](https://forum.seafile.com/t/feature-request-avatar-picture-from-ldap/3350/6) to the developers to give the feature more visibility.
## Configuring Seafile (prior to v11.0) to use LLDAP directly
**Note for Seafile before v11:** Seafile's LDAP interface used to require a unique, immutable user identifier in the format of `username@domain`. This no longer applies as of Seafile v11.0 (see the previous section, "Configuring Seafile v11.0+").
For Seafile instances prior to v11, since LLDAP does not provide an attribute like `userPrincipalName`, the only attribute that somewhat qualifies is therefore `mail`. However, using `mail` as the user identifier results in the issue that Seafile will treat you as an entirely new user if you change your email address through LLDAP.
Add the following to your `seafile/conf/ccnet.conf` file:
```
[LDAP]
@@ -86,4 +117,4 @@ OAUTH_ATTRIBUTE_MAP = {
}
```
Restart both your Authelia and Seafile server. You should see a "Single Sign-On" button on Seafile's login page. Clicking it should redirect you to Authelia. If you use the [example config for Authelia](authelia_config.yml), you should be able to log in using your LLDAP User ID.


@@ -0,0 +1,16 @@
<!-- Append at the end of the <entry> sections in traccar.xml -->
<entry key='ldap.enable'>true</entry>
<!-- Important: the LDAP port must be specified in both ldap.url and ldap.port -->
<entry key='ldap.url'>ldap://lldap:3890</entry>
<entry key='ldap.port'>3890</entry>
<entry key='ldap.user'>UID=admin,OU=people,DC=domain,DC=com</entry>
<entry key='ldap.password'>BIND_USER_PASSWORD_HERE</entry>
<entry key='ldap.force'>true</entry>
<entry key='ldap.base'>OU=people,DC=domain,DC=com</entry>
<entry key='ldap.idAttribute'>uid</entry>
<entry key='ldap.nameAttribute'>cn</entry>
<entry key='ldap.mailAttribute'>mail</entry>
<!-- Only allow users belonging to group 'traccar' to login -->
<entry key='ldap.searchFilter'>(&amp;(|(uid=:login)(mail=:login))(memberOf=cn=traccar,ou=groups,dc=domain,dc=com))</entry>
<!-- Make new users administrators if they belong to group 'lldap_admin' -->
<entry key='ldap.adminFilter'>(&amp;(|(uid=:login)(mail=:login))(memberOf=cn=lldap_admin,ou=groups,dc=domain,dc=com))</entry>


@@ -49,7 +49,7 @@ mail
```
### Display Name Field Mapping
```
cn
```
### Avatar Picture Field Mapping
```


@@ -0,0 +1,51 @@
# Configuration for Zitadel
In Zitadel, go to `Instance > Settings` for instance-wide LDAP setup or `<Organization Name> > Settings` for organization-wide LDAP setup.
## Identity Providers Setup
Click `Identity Providers` and select `Active Directory/LDAP`.
**Group filter is not supported in `Zitadel` at the time of writing.**
Replace every instance of `dc=example,dc=com` with your configured domain.
### Connection
* Name: The name to identify your identity provider
* Servers: `ldaps://<FQDN or Host IP>:<Port for LDAPS>` or `ldap://<FQDN or Host IP>:<Port for LDAP>`
* BaseDn: `dc=example,dc=com`
* BindDn: `cn=admin,ou=people,dc=example,dc=com`. It is recommended that you create a separate user account (e.g., `bind_user`) instead of `admin` for sharing bind credentials with other services. The `bind_user` should be a member of the `lldap_strict_readonly` group to limit access to your LDAP configuration in LLDAP.
* Bind Password: `<user password>`
### User binding
* Userbase: `dn`
* User filters: `uid`. `mail` will not work.
* User Object Classes: `person`
### LDAP Attributes
* ID attribute: `uid`
* displayName attribute: `cn`
* Email attribute: `mail`
* Given name attribute: `givenName`
* Family name attribute: `lastName`
* Preferred username attribute: `uid`
### optional
The following section applies to `Zitadel` only; nothing will change on the `LLDAP` side.
* Account creation allowed [x]
* Account linking allowed [x]
**Either one of them, or both, must be enabled.**
**DO NOT** enable `Automatic update` if you haven't set up an SMTP server. Zitadel will update the account's email and send a verification code to verify the address.
If you don't have an SMTP server set up correctly and the email address of `ZITADEL Admin` is changed, you are **permanently** locked out.
`Automatic creation` can automatically create a new account without user interaction when the `Given name attribute`, `Family name attribute`, `Email attribute`, and `Preferred username attribute` are present.
## Enable Identity Provider
After clicking `Save`, you will be redirected to `Identity Providers` page.
Enable LDAP by hovering over the item and clicking the checkmark (`set as available`)
## Enable LDAP Login
Under `Settings`, select `Login Behavior and Security`
Under `Advanced`, enable `External IDP allowed`

generate_secrets.sh

@@ -0,0 +1,12 @@
#! /bin/sh
print_random() {
LC_ALL=C tr -dc 'A-Za-z0-9!#%&()*+,-./:;<=>?@[\]^_{|}~' </dev/urandom | head -c 32
}
/bin/echo -n "LLDAP_JWT_SECRET='"
print_random
echo "'"
/bin/echo -n "LLDAP_KEY_SEED='"
print_random
echo "'"
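A usage sketch: the same `tr`-based generation can be done inline. The variant below reads a bounded chunk of `/dev/urandom` first (to avoid broken-pipe noise) and prints both secrets in the env-file format the script above produces; the function name is an example, not part of the script:

```shell
# Generate a 32-character secret with the same charset as print_random above.
gen_secret() {
    # 512 random bytes yield ~180 chars after filtering, comfortably more than 32.
    head -c 512 /dev/urandom | LC_ALL=C tr -dc 'A-Za-z0-9!#%&()*+,-./:;<=>?@[\]^_{|}~' | head -c 32
}

jwt_secret="$(gen_secret)"
key_seed="$(gen_secret)"
printf "LLDAP_JWT_SECRET='%s'\n" "$jwt_secret"
printf "LLDAP_KEY_SEED='%s'\n" "$key_seed"
```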


@@ -10,7 +10,9 @@
## The host address that the LDAP server will be bound to.
## To enable IPv6 support, simply switch "ldap_host" to "::":
## To only allow connections from localhost (if you want to restrict to local self-hosted services),
## change it to "127.0.0.1" ("::1" in case of IPv6).
## If LLDAP server is running in docker, set it to "0.0.0.0" ("::" for IPv6) to allow connections
## originating from outside the container.
#ldap_host = "0.0.0.0"
## The port on which to have the LDAP server.
@@ -19,7 +21,9 @@
## The host address that the HTTP server will be bound to.
## To enable IPv6 support, simply switch "http_host" to "::".
## To only allow connections from localhost (if you want to restrict to local self-hosted services),
## change it to "127.0.0.1" ("::1" in case of IPv6).
## If LLDAP server is running in docker, set it to "0.0.0.0" ("::" for IPv6) to allow connections
## originating from outside the container.
#http_host = "0.0.0.0"
## The port on which to have the HTTP server, for user login and
@@ -74,6 +78,12 @@
## is just the default one.
#ldap_user_pass = "REPLACE_WITH_PASSWORD"
## Force reset of the admin password.
## Break glass in case of emergency: if you lost the admin password, you
## can set this to true to force a reset of the admin password to the value
## of ldap_user_pass above.
# force_reset_admin_password = false
## Database URL.
## This encodes the type of database (SQlite, MySQL, or PostgreSQL)
## , the path, the user, password, and sometimes the mode (when
@@ -88,21 +98,20 @@
database_url = "sqlite:///data/users.db?mode=rwc"
## Private key file.
## Not recommended, use key_seed instead.
## Contains the secret private key used to store the passwords safely.
## Note that even with a database dump and the private key, an attacker
## would still have to perform an (expensive) brute force attack to find
## each password.
## Randomly generated on first run if it doesn't exist.
## Alternatively, you can use key_seed to override this instead of relying on
## a file.
## Env variable: LLDAP_KEY_FILE
#key_file = "/data/private_key"
## Seed to generate the server private key, see key_file above.
## This can be any random string, the recommendation is that it's at least 12
## characters long.
## Env variable: LLDAP_KEY_SEED
key_seed = "RanD0m STR1ng"
## Ignored attributes.
## Some services will request attributes that are not present in LLDAP. When it


@@ -194,6 +194,7 @@ impl TryFrom<ResultEntry> for User {
first_name,
last_name,
avatar: avatar.map(base64::encode),
attributes: None,
},
password,
entry.dn,


@@ -136,7 +136,7 @@ fn try_login(
let ClientLoginStartResult { state, message } =
start_login(password, &mut rng).context("Could not initialize login")?;
let req = ClientLoginStartRequest {
username: username.into(),
login_start_request: message,
};
let response = client

schema.graphql

@@ -3,20 +3,20 @@ type AttributeValue {
value: [String!]!
}
input EqualityConstraint {
field: String!
value: String!
}
type Mutation {
createUser(user: CreateUserInput!): User!
createGroup(name: String!): Group!
createGroupWithDetails(request: CreateGroupInput!): Group!
updateUser(user: UpdateUserInput!): Success!
updateGroup(group: UpdateGroupInput!): Success!
addUserToGroup(userId: String!, groupId: Int!): Success!
removeUserFromGroup(userId: String!, groupId: Int!): Success!
deleteUser(userId: String!): Success!
deleteGroup(groupId: Int!): Success!
addUserAttribute(name: String!, attributeType: AttributeType!, isList: Boolean!, isVisible: Boolean!, isEditable: Boolean!): Success!
addGroupAttribute(name: String!, attributeType: AttributeType!, isList: Boolean!, isVisible: Boolean!, isEditable: Boolean!): Success!
deleteUserAttribute(name: String!): Success!
deleteGroupAttribute(name: String!): Success!
}
type Group {
@@ -46,17 +46,6 @@ input RequestFilter {
"DateTime"
scalar DateTimeUtc
type Query {
apiVersion: String!
user(userId: String!): User!
@@ -73,7 +62,79 @@ input CreateUserInput {
displayName: String
firstName: String
lastName: String
"Base64 encoded JpegPhoto." avatar: String
"User-defined attributes." attributes: [AttributeValueInput!]
}
type AttributeSchema {
name: String!
attributeType: AttributeType!
isList: Boolean!
isVisible: Boolean!
isEditable: Boolean!
isHardcoded: Boolean!
}
"The fields that can be updated for a user."
input UpdateUserInput {
id: String!
email: String
displayName: String
firstName: String
lastName: String
"Base64 encoded JpegPhoto." avatar: String
"""
Attribute names to remove.
They are processed before insertions.
""" removeAttributes: [String!]
"""
Inserts or updates the given attributes.
For lists, the entire list must be provided.
""" insertAttributes: [AttributeValueInput!]
}
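A hypothetical mutation exercising these new fields (the user ID, attribute names, and values are examples only, and the named attributes must already exist in the schema):

```graphql
mutation {
  updateUser(user: {
    id: "jdoe"
    removeAttributes: ["nickname"]
    insertAttributes: [{ name: "phone", value: ["555-0100"] }]
  }) {
    ok
  }
}
```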
type Schema {
userSchema: AttributeList!
groupSchema: AttributeList!
}
"The fields that can be updated for a group."
input UpdateGroupInput {
"The group ID." id: Int!
"The new display name." displayName: String
"""
Attribute names to remove.
They are processed before insertions.
""" removeAttributes: [String!]
"""
Inserts or updates the given attributes.
For lists, the entire list must be provided.
""" insertAttributes: [AttributeValueInput!]
}
input AttributeValueInput {
"""
The name of the attribute. It must be present in the schema, and the type informs how
to interpret the values.
""" name: String!
"""
The values of the attribute.
If the attribute is not a list, the vector must contain exactly one element.
Integers (signed 64 bits) are represented as strings.
Dates are represented as strings in RFC3339 format, e.g. "2019-10-12T07:20:50.52Z".
JpegPhotos are represented as base64 encoded strings. They must be valid JPEGs.
""" value: [String!]!
}
"The details required to create a group."
input CreateGroupInput {
displayName: String!
"User-defined attributes." attributes: [AttributeValueInput!]
}
type User {
@@ -95,29 +156,17 @@ type AttributeList {
attributes: [AttributeSchema!]!
}
enum AttributeType {
STRING
INTEGER
JPEG_PHOTO
DATE_TIME
}
type Success {
ok: Boolean!
}
schema {
query: Query
mutation: Mutation


@@ -8,7 +8,7 @@ keywords = ["cli", "ldap", "graphql", "server", "authentication"]
license = "GPL-3.0-only"
name = "lldap"
repository = "https://github.com/lldap/lldap"
version = "0.5.1-alpha"
[dependencies]
actix = "0.13"
@@ -25,6 +25,7 @@ base64 = "0.21"
bincode = "1.3"
cron = "*"
derive_builder = "0.12"
derive_more = "0.99"
figment_file_provider_adapter = "0.1"
futures = "*"
futures-util = "*"
@@ -34,7 +35,7 @@ itertools = "0.10"
juniper = "0.15"
jwt = "0.16"
lber = "0.4.1"
ldap3_proto = "^0.4.3"
log = "*"
orion = "0.17"
rand_chacha = "0.3"
@@ -53,7 +54,7 @@ tracing-actix-web = "0.7"
tracing-attributes = "^0.1.21"
tracing-log = "*"
urlencoding = "2"
webpki-roots = "0.22.2"
[dependencies.chrono]
features = ["serde"]
@@ -78,6 +79,7 @@ version = "0.10.1"
[dependencies.lldap_auth]
path = "../auth"
features = ["opaque_server", "opaque_client", "sea_orm"]
[dependencies.opaque-ke]
version = "0.6"
@@ -162,3 +164,7 @@ features = ["file_locks"]
[dev-dependencies.uuid]
version = "1"
features = ["v4"]
[dev-dependencies.figment]
features = ["test"]
version = "*"


@@ -0,0 +1,50 @@
use crate::domain::types::{AttributeType, JpegPhoto, Serialized};
use anyhow::{bail, Context as AnyhowContext};
pub fn deserialize_attribute_value(
value: &[String],
typ: AttributeType,
is_list: bool,
) -> anyhow::Result<Serialized> {
if !is_list && value.len() != 1 {
bail!("Attribute is not a list, but multiple values were provided");
}
let parse_int = |value: &String| -> anyhow::Result<i64> {
value
.parse::<i64>()
.with_context(|| format!("Invalid integer value {}", value))
};
let parse_date = |value: &String| -> anyhow::Result<chrono::NaiveDateTime> {
Ok(chrono::DateTime::parse_from_rfc3339(value)
.with_context(|| format!("Invalid date value {}", value))?
.naive_utc())
};
let parse_photo = |value: &String| -> anyhow::Result<JpegPhoto> {
JpegPhoto::try_from(value.as_str()).context("Provided image is not a valid JPEG")
};
Ok(match (typ, is_list) {
(AttributeType::String, false) => Serialized::from(&value[0]),
(AttributeType::String, true) => Serialized::from(&value),
(AttributeType::Integer, false) => Serialized::from(&parse_int(&value[0])?),
(AttributeType::Integer, true) => Serialized::from(
&value
.iter()
.map(parse_int)
.collect::<anyhow::Result<Vec<_>>>()?,
),
(AttributeType::DateTime, false) => Serialized::from(&parse_date(&value[0])?),
(AttributeType::DateTime, true) => Serialized::from(
&value
.iter()
.map(parse_date)
.collect::<anyhow::Result<Vec<_>>>()?,
),
(AttributeType::JpegPhoto, false) => Serialized::from(&parse_photo(&value[0])?),
(AttributeType::JpegPhoto, true) => Serialized::from(
&value
.iter()
.map(parse_photo)
.collect::<anyhow::Result<Vec<_>>>()?,
),
})
}


@@ -1,8 +1,8 @@
use crate::domain::{
error::Result,
types::{
AttributeName, AttributeType, AttributeValue, Email, Group, GroupDetails, GroupId,
GroupName, JpegPhoto, Serialized, User, UserAndGroups, UserColumn, UserId, Uuid,
},
};
use async_trait::async_trait;
@@ -54,10 +54,10 @@ pub enum UserRequestFilter {
UserId(UserId),
UserIdSubString(SubStringFilter),
Equality(UserColumn, String),
AttributeEquality(AttributeName, Serialized),
SubString(UserColumn, SubStringFilter),
// Check if a user belongs to a group identified by name.
MemberOf(String),
MemberOf(GroupName),
// Same, by id.
MemberOfId(GroupId),
}
@@ -77,12 +77,13 @@ pub enum GroupRequestFilter {
And(Vec<GroupRequestFilter>),
Or(Vec<GroupRequestFilter>),
Not(Box<GroupRequestFilter>),
DisplayName(String),
DisplayName(GroupName),
DisplayNameSubString(SubStringFilter),
Uuid(Uuid),
GroupId(GroupId),
// Check if the group contains a user identified by uid.
Member(UserId),
AttributeEquality(AttributeName, Serialized),
}
impl From<bool> for GroupRequestFilter {
@@ -99,33 +100,44 @@ impl From<bool> for GroupRequestFilter {
pub struct CreateUserRequest {
// Same fields as User, but no creation_date, and with password.
pub user_id: UserId,
pub email: String,
pub email: Email,
pub display_name: Option<String>,
pub first_name: Option<String>,
pub last_name: Option<String>,
pub avatar: Option<JpegPhoto>,
pub attributes: Vec<AttributeValue>,
}
#[derive(PartialEq, Eq, Debug, Serialize, Deserialize, Clone, Default)]
pub struct UpdateUserRequest {
// Same fields as CreateUserRequest, but with an extra layer of Option.
pub user_id: UserId,
pub email: Option<String>,
pub email: Option<Email>,
pub display_name: Option<String>,
pub first_name: Option<String>,
pub last_name: Option<String>,
pub avatar: Option<JpegPhoto>,
pub delete_attributes: Vec<AttributeName>,
pub insert_attributes: Vec<AttributeValue>,
}
#[derive(PartialEq, Eq, Debug, Serialize, Deserialize, Clone, Default)]
pub struct CreateGroupRequest {
pub display_name: GroupName,
pub attributes: Vec<AttributeValue>,
}
#[derive(PartialEq, Eq, Debug, Serialize, Deserialize, Clone)]
pub struct UpdateGroupRequest {
pub group_id: GroupId,
pub display_name: Option<String>,
pub display_name: Option<GroupName>,
pub delete_attributes: Vec<AttributeName>,
pub insert_attributes: Vec<AttributeValue>,
}
#[derive(PartialEq, Eq, Debug, Serialize, Deserialize, Clone)]
pub struct AttributeSchema {
pub name: String,
pub name: AttributeName,
//TODO: pub aliases: Vec<String>,
pub attribute_type: AttributeType,
pub is_list: bool,
@@ -134,16 +146,27 @@ pub struct AttributeSchema {
pub is_hardcoded: bool,
}
#[derive(PartialEq, Eq, Debug, Serialize, Deserialize, Clone)]
pub struct CreateAttributeRequest {
pub name: AttributeName,
pub attribute_type: AttributeType,
pub is_list: bool,
pub is_visible: bool,
pub is_editable: bool,
}
#[derive(PartialEq, Eq, Debug, Serialize, Deserialize, Clone)]
pub struct AttributeList {
pub attributes: Vec<AttributeSchema>,
}
impl AttributeList {
pub fn get_attribute_type(&self, name: &str) -> Option<(AttributeType, bool)> {
self.attributes
.iter()
.find(|a| a.name == name)
pub fn get_attribute_schema(&self, name: &AttributeName) -> Option<&AttributeSchema> {
self.attributes.iter().find(|a| a.name == *name)
}
pub fn get_attribute_type(&self, name: &AttributeName) -> Option<(AttributeType, bool)> {
self.get_attribute_schema(name)
.map(|a| (a.attribute_type, a.is_list))
}
}
@@ -160,20 +183,20 @@ pub trait LoginHandler: Send + Sync {
}
#[async_trait]
pub trait GroupListerBackendHandler: SchemaBackendHandler {
pub trait GroupListerBackendHandler: ReadSchemaBackendHandler {
async fn list_groups(&self, filters: Option<GroupRequestFilter>) -> Result<Vec<Group>>;
}
#[async_trait]
pub trait GroupBackendHandler: SchemaBackendHandler {
pub trait GroupBackendHandler: ReadSchemaBackendHandler {
async fn get_group_details(&self, group_id: GroupId) -> Result<GroupDetails>;
async fn update_group(&self, request: UpdateGroupRequest) -> Result<()>;
async fn create_group(&self, group_name: &str) -> Result<GroupId>;
async fn create_group(&self, request: CreateGroupRequest) -> Result<GroupId>;
async fn delete_group(&self, group_id: GroupId) -> Result<()>;
}
#[async_trait]
pub trait UserListerBackendHandler: SchemaBackendHandler {
pub trait UserListerBackendHandler: ReadSchemaBackendHandler {
async fn list_users(
&self,
filters: Option<UserRequestFilter>,
@@ -182,7 +205,7 @@ pub trait UserListerBackendHandler: SchemaBackendHandler {
}
#[async_trait]
pub trait UserBackendHandler: SchemaBackendHandler {
pub trait UserBackendHandler: ReadSchemaBackendHandler {
async fn get_user_details(&self, user_id: &UserId) -> Result<User>;
async fn create_user(&self, request: CreateUserRequest) -> Result<()>;
async fn update_user(&self, request: UpdateUserRequest) -> Result<()>;
@@ -193,10 +216,19 @@ pub trait UserBackendHandler: SchemaBackendHandler {
}
#[async_trait]
pub trait SchemaBackendHandler {
pub trait ReadSchemaBackendHandler {
async fn get_schema(&self) -> Result<Schema>;
}
#[async_trait]
pub trait SchemaBackendHandler: ReadSchemaBackendHandler {
async fn add_user_attribute(&self, request: CreateAttributeRequest) -> Result<()>;
async fn add_group_attribute(&self, request: CreateAttributeRequest) -> Result<()>;
// Note: It's up to the caller to make sure that the attribute is not hardcoded.
async fn delete_user_attribute(&self, name: &AttributeName) -> Result<()>;
async fn delete_group_attribute(&self, name: &AttributeName) -> Result<()>;
}
#[async_trait]
pub trait BackendHandler:
Send
@@ -205,6 +237,7 @@ pub trait BackendHandler:
+ UserBackendHandler
+ UserListerBackendHandler
+ GroupListerBackendHandler
+ ReadSchemaBackendHandler
+ SchemaBackendHandler
{
}


@@ -1,19 +1,22 @@
use chrono::TimeZone;
use ldap3_proto::{
proto::LdapOp, LdapFilter, LdapPartialAttribute, LdapResultCode, LdapSearchResultEntry,
};
use tracing::{debug, instrument, warn};
use crate::domain::{
deserialize::deserialize_attribute_value,
handler::{GroupListerBackendHandler, GroupRequestFilter},
ldap::error::LdapError,
types::{Group, UserId, Uuid},
schema::{PublicSchema, SchemaGroupAttributeExtractor},
types::{AttributeName, AttributeType, Group, UserId, Uuid},
};
use super::{
error::LdapResult,
utils::{
expand_attribute_wildcards, get_group_id_from_distinguished_name,
get_user_id_from_distinguished_name, map_group_field, LdapInfo,
expand_attribute_wildcards, get_custom_attribute, get_group_id_from_distinguished_name,
get_user_id_from_distinguished_name, map_group_field, GroupFieldType, LdapInfo,
},
};
@@ -22,40 +25,57 @@ pub fn get_group_attribute(
base_dn_str: &str,
attribute: &str,
user_filter: &Option<UserId>,
ignored_group_attributes: &[String],
ignored_group_attributes: &[AttributeName],
schema: &PublicSchema,
) -> Option<Vec<Vec<u8>>> {
let attribute = attribute.to_ascii_lowercase();
let attribute_values = match attribute.as_str() {
"objectclass" => vec![b"groupOfUniqueNames".to_vec()],
let attribute = AttributeName::from(attribute);
let attribute_values = match map_group_field(&attribute, schema) {
GroupFieldType::ObjectClass => vec![b"groupOfUniqueNames".to_vec()],
// Always returned as part of the base response.
"dn" | "distinguishedname" => return None,
"cn" | "uid" | "id" => vec![group.display_name.clone().into_bytes()],
"entryuuid" | "uuid" => vec![group.uuid.to_string().into_bytes()],
"member" | "uniquemember" => group
GroupFieldType::Dn => return None,
GroupFieldType::EntryDn => {
vec![format!("uid={},ou=groups,{}", group.display_name, base_dn_str).into_bytes()]
}
GroupFieldType::DisplayName => vec![group.display_name.to_string().into_bytes()],
GroupFieldType::CreationDate => vec![chrono::Utc
.from_utc_datetime(&group.creation_date)
.to_rfc3339()
.into_bytes()],
GroupFieldType::Member => group
.users
.iter()
.filter(|u| user_filter.as_ref().map(|f| *u == f).unwrap_or(true))
.map(|u| format!("uid={},ou=people,{}", u, base_dn_str).into_bytes())
.collect(),
"1.1" => return None,
// We ignore the operational attribute wildcard
"+" => return None,
"*" => {
panic!(
"Matched {}, * should have been expanded into attribute list and * removed",
attribute
)
GroupFieldType::Uuid => vec![group.uuid.to_string().into_bytes()],
GroupFieldType::Attribute(attr, _, _) => {
get_custom_attribute::<SchemaGroupAttributeExtractor>(&group.attributes, &attr, schema)?
}
_ => {
if !ignored_group_attributes.contains(&attribute) {
warn!(
r#"Ignoring unrecognized group attribute: {}\n\
To disable this warning, add it to "ignored_group_attributes" in the config."#,
GroupFieldType::NoMatch => match attribute.as_str() {
"1.1" => return None,
// We ignore the operational attribute wildcard
"+" => return None,
"*" => {
panic!(
"Matched {}, * should have been expanded into attribute list and * removed",
attribute
);
)
}
return None;
}
_ => {
if ignored_group_attributes.contains(&attribute) {
return None;
}
get_custom_attribute::<SchemaGroupAttributeExtractor>(
&group.attributes,
&attribute,
schema,
).or_else(||{warn!(
r#"Ignoring unrecognized group attribute: {}\n\
To disable this warning, add it to "ignored_group_attributes" in the config."#,
attribute
);None})?
}
},
};
if attribute_values.len() == 1 && attribute_values[0].is_empty() {
None
@@ -82,7 +102,8 @@ fn make_ldap_search_group_result_entry(
base_dn_str: &str,
attributes: &[String],
user_filter: &Option<UserId>,
ignored_group_attributes: &[String],
ignored_group_attributes: &[AttributeName],
schema: &PublicSchema,
) -> LdapSearchResultEntry {
let expanded_attributes = expand_group_attribute_wildcards(attributes);
@@ -97,6 +118,7 @@ fn make_ldap_search_group_result_entry(
a,
user_filter,
ignored_group_attributes,
schema,
)?;
Some(LdapPartialAttribute {
atype: a.to_string(),
@@ -107,57 +129,79 @@ fn make_ldap_search_group_result_entry(
}
}
fn get_group_attribute_equality_filter(
field: &AttributeName,
typ: AttributeType,
is_list: bool,
value: &str,
) -> LdapResult<GroupRequestFilter> {
deserialize_attribute_value(&[value.to_owned()], typ, is_list)
.map_err(|e| LdapError {
code: LdapResultCode::Other,
message: format!("Invalid value for attribute {}: {}", field, e),
})
.map(|v| GroupRequestFilter::AttributeEquality(field.clone(), v))
}
fn convert_group_filter(
ldap_info: &LdapInfo,
filter: &LdapFilter,
schema: &PublicSchema,
) -> LdapResult<GroupRequestFilter> {
let rec = |f| convert_group_filter(ldap_info, f);
let rec = |f| convert_group_filter(ldap_info, f, schema);
match filter {
LdapFilter::Equality(field, value) => {
let field = &field.to_ascii_lowercase();
let value = &value.to_ascii_lowercase();
match field.as_str() {
"member" | "uniquemember" => {
let field = AttributeName::from(field.as_str());
let value = value.to_ascii_lowercase();
match map_group_field(&field, schema) {
GroupFieldType::DisplayName => Ok(GroupRequestFilter::DisplayName(value.into())),
GroupFieldType::Uuid => Ok(GroupRequestFilter::Uuid(
Uuid::try_from(value.as_str()).map_err(|e| LdapError {
code: LdapResultCode::InappropriateMatching,
message: format!("Invalid UUID: {:#}", e),
})?,
)),
GroupFieldType::Member => {
let user_name = get_user_id_from_distinguished_name(
value,
&value,
&ldap_info.base_dn,
&ldap_info.base_dn_str,
)?;
Ok(GroupRequestFilter::Member(user_name))
}
"objectclass" => Ok(GroupRequestFilter::from(matches!(
GroupFieldType::ObjectClass => Ok(GroupRequestFilter::from(matches!(
value.as_str(),
"groupofuniquenames" | "groupofnames"
))),
"dn" => Ok(get_group_id_from_distinguished_name(
value.to_ascii_lowercase().as_str(),
&ldap_info.base_dn,
&ldap_info.base_dn_str,
)
.map(GroupRequestFilter::DisplayName)
.unwrap_or_else(|_| {
warn!("Invalid dn filter on group: {}", value);
GroupRequestFilter::from(false)
})),
_ => match map_group_field(field) {
Some("display_name") => Ok(GroupRequestFilter::DisplayName(value.to_string())),
Some("uuid") => Ok(GroupRequestFilter::Uuid(
Uuid::try_from(value.as_str()).map_err(|e| LdapError {
code: LdapResultCode::InappropriateMatching,
message: format!("Invalid UUID: {:#}", e),
})?,
)),
_ => {
if !ldap_info.ignored_group_attributes.contains(field) {
warn!(
r#"Ignoring unknown group attribute "{:?}" in filter.\n\
GroupFieldType::Dn | GroupFieldType::EntryDn => {
Ok(get_group_id_from_distinguished_name(
value.as_str(),
&ldap_info.base_dn,
&ldap_info.base_dn_str,
)
.map(GroupRequestFilter::DisplayName)
.unwrap_or_else(|_| {
warn!("Invalid dn filter on group: {}", value);
GroupRequestFilter::from(false)
}))
}
GroupFieldType::NoMatch => {
if !ldap_info.ignored_group_attributes.contains(&field) {
warn!(
r#"Ignoring unknown group attribute "{}" in filter.\n\
To disable this warning, add it to "ignored_group_attributes" in the config."#,
field
);
}
Ok(GroupRequestFilter::from(false))
field
);
}
},
Ok(GroupRequestFilter::from(false))
}
GroupFieldType::Attribute(field, typ, is_list) => {
get_group_attribute_equality_filter(&field, typ, is_list, &value)
}
GroupFieldType::CreationDate => Err(LdapError {
code: LdapResultCode::UnwillingToPerform,
message: "Creation date filter for groups not supported".to_owned(),
}),
}
}
LdapFilter::And(filters) => Ok(GroupRequestFilter::And(
@@ -168,24 +212,23 @@ fn convert_group_filter(
)),
LdapFilter::Not(filter) => Ok(GroupRequestFilter::Not(Box::new(rec(filter)?))),
LdapFilter::Present(field) => {
let field = &field.to_ascii_lowercase();
Ok(GroupRequestFilter::from(
field == "objectclass"
|| field == "dn"
|| field == "distinguishedname"
|| map_group_field(field).is_some(),
))
let field = AttributeName::from(field.as_str());
Ok(GroupRequestFilter::from(!matches!(
map_group_field(&field, schema),
GroupFieldType::NoMatch
)))
}
LdapFilter::Substring(field, substring_filter) => {
let field = &field.to_ascii_lowercase();
match map_group_field(field.as_str()) {
Some("display_name") => Ok(GroupRequestFilter::DisplayNameSubString(
let field = AttributeName::from(field.as_str());
match map_group_field(&field, schema) {
GroupFieldType::DisplayName => Ok(GroupRequestFilter::DisplayNameSubString(
substring_filter.clone().into(),
)),
GroupFieldType::NoMatch => Ok(GroupRequestFilter::from(false)),
_ => Err(LdapError {
code: LdapResultCode::UnwillingToPerform,
message: format!(
"Unsupported group attribute for substring filter: {:?}",
"Unsupported group attribute for substring filter: \"{}\"",
field
),
}),
@@ -204,8 +247,9 @@ pub async fn get_groups_list<Backend: GroupListerBackendHandler>(
ldap_filter: &LdapFilter,
base: &str,
backend: &Backend,
schema: &PublicSchema,
) -> LdapResult<Vec<Group>> {
let filters = convert_group_filter(ldap_info, ldap_filter)?;
let filters = convert_group_filter(ldap_info, ldap_filter, schema)?;
debug!(?filters);
backend
.list_groups(Some(filters))
@@ -221,6 +265,7 @@ pub fn convert_groups_to_ldap_op<'a>(
attributes: &'a [String],
ldap_info: &'a LdapInfo,
user_filter: &'a Option<UserId>,
schema: &'a PublicSchema,
) -> impl Iterator<Item = LdapOp> + 'a {
groups.into_iter().map(move |g| {
LdapOp::SearchResultEntry(make_ldap_search_group_result_entry(
@@ -229,6 +274,7 @@ pub fn convert_groups_to_ldap_op<'a>(
attributes,
user_filter,
&ldap_info.ignored_group_attributes,
schema,
))
})
}


@@ -5,7 +5,8 @@ use ldap3_proto::{
use tracing::{debug, instrument, warn};
use crate::domain::{
handler::{Schema, UserListerBackendHandler, UserRequestFilter},
deserialize::deserialize_attribute_value,
handler::{UserListerBackendHandler, UserRequestFilter},
ldap::{
error::{LdapError, LdapResult},
utils::{
@@ -13,7 +14,8 @@ use crate::domain::{
get_user_id_from_distinguished_name, map_user_field, LdapInfo, UserFieldType,
},
},
types::{GroupDetails, User, UserAndGroups, UserColumn, UserId},
schema::{PublicSchema, SchemaUserAttributeExtractor},
types::{AttributeName, AttributeType, GroupDetails, User, UserAndGroups, UserColumn, UserId},
};
pub fn get_user_attribute(
@@ -21,62 +23,79 @@ pub fn get_user_attribute(
attribute: &str,
base_dn_str: &str,
groups: Option<&[GroupDetails]>,
ignored_user_attributes: &[String],
schema: &Schema,
ignored_user_attributes: &[AttributeName],
schema: &PublicSchema,
) -> Option<Vec<Vec<u8>>> {
let attribute = attribute.to_ascii_lowercase();
let attribute_values = match attribute.as_str() {
"objectclass" => vec![
let attribute = AttributeName::from(attribute);
let attribute_values = match map_user_field(&attribute, schema) {
UserFieldType::ObjectClass => vec![
b"inetOrgPerson".to_vec(),
b"posixAccount".to_vec(),
b"mailAccount".to_vec(),
b"person".to_vec(),
],
// dn is always returned as part of the base response.
"dn" | "distinguishedname" => return None,
"uid" | "user_id" | "id" => vec![user.user_id.to_string().into_bytes()],
"entryuuid" | "uuid" => vec![user.uuid.to_string().into_bytes()],
"mail" | "email" => vec![user.email.clone().into_bytes()],
"givenname" | "first_name" | "firstname" => {
get_custom_attribute(&user.attributes, "first_name", schema)?
UserFieldType::Dn => return None,
UserFieldType::EntryDn => {
vec![format!("uid={},ou=people,{}", &user.user_id, base_dn_str).into_bytes()]
}
"sn" | "last_name" | "lastname" => {
get_custom_attribute(&user.attributes, "last_name", schema)?
}
"jpegphoto" | "avatar" => get_custom_attribute(&user.attributes, "avatar", schema)?,
"memberof" => groups
UserFieldType::MemberOf => groups
.into_iter()
.flatten()
.map(|id_and_name| {
format!("cn={},ou=groups,{}", &id_and_name.display_name, base_dn_str).into_bytes()
})
.collect(),
"cn" | "displayname" => vec![user.display_name.clone()?.into_bytes()],
"creationdate" | "creation_date" | "createtimestamp" | "modifytimestamp" => {
vec![chrono::Utc
.from_utc_datetime(&user.creation_date)
.to_rfc3339()
.into_bytes()]
UserFieldType::PrimaryField(UserColumn::UserId) => {
vec![user.user_id.to_string().into_bytes()]
}
"1.1" => return None,
// We ignore the operational attribute wildcard.
"+" => return None,
"*" => {
panic!(
"Matched {}, * should have been expanded into attribute list and * removed",
attribute
)
UserFieldType::PrimaryField(UserColumn::Email) => vec![user.email.to_string().into_bytes()],
UserFieldType::PrimaryField(
UserColumn::LowercaseEmail
| UserColumn::PasswordHash
| UserColumn::TotpSecret
| UserColumn::MfaType,
) => panic!("Should not get here"),
UserFieldType::PrimaryField(UserColumn::Uuid) => vec![user.uuid.to_string().into_bytes()],
UserFieldType::PrimaryField(UserColumn::DisplayName) => {
vec![user.display_name.clone()?.into_bytes()]
}
_ => {
if !ignored_user_attributes.contains(&attribute) {
warn!(
r#"Ignoring unrecognized group attribute: {}\n\
To disable this warning, add it to "ignored_user_attributes" in the config."#,
UserFieldType::PrimaryField(UserColumn::CreationDate) => vec![chrono::Utc
.from_utc_datetime(&user.creation_date)
.to_rfc3339()
.into_bytes()],
UserFieldType::Attribute(attr, _, _) => {
get_custom_attribute::<SchemaUserAttributeExtractor>(&user.attributes, &attr, schema)?
}
UserFieldType::NoMatch => match attribute.as_str() {
"1.1" => return None,
// We ignore the operational attribute wildcard.
"+" => return None,
"*" => {
panic!(
"Matched {}, * should have been expanded into attribute list and * removed",
attribute
);
)
}
return None;
}
_ => {
if ignored_user_attributes.contains(&attribute) {
return None;
}
get_custom_attribute::<SchemaUserAttributeExtractor>(
&user.attributes,
&attribute,
schema,
)
.or_else(|| {
warn!(
r#"Ignoring unrecognized user attribute: {}\n\
To disable this warning, add it to "ignored_user_attributes" in the config."#,
attribute
);
None
})?
}
},
};
if attribute_values.len() == 1 && attribute_values[0].is_empty() {
None
@@ -102,8 +121,8 @@ fn make_ldap_search_user_result_entry(
base_dn_str: &str,
attributes: &[String],
groups: Option<&[GroupDetails]>,
ignored_user_attributes: &[String],
schema: &Schema,
ignored_user_attributes: &[AttributeName],
schema: &PublicSchema,
) -> LdapSearchResultEntry {
let expanded_attributes = expand_user_attribute_wildcards(attributes);
let dn = format!("uid={},ou=people,{}", user.user_id.as_str(), base_dn_str);
@@ -129,8 +148,26 @@ fn make_ldap_search_user_result_entry(
}
}
fn convert_user_filter(ldap_info: &LdapInfo, filter: &LdapFilter) -> LdapResult<UserRequestFilter> {
let rec = |f| convert_user_filter(ldap_info, f);
fn get_user_attribute_equality_filter(
field: &AttributeName,
typ: AttributeType,
is_list: bool,
value: &str,
) -> LdapResult<UserRequestFilter> {
deserialize_attribute_value(&[value.to_owned()], typ, is_list)
.map_err(|e| LdapError {
code: LdapResultCode::Other,
message: format!("Invalid value for attribute {}: {}", field, e),
})
.map(|v| UserRequestFilter::AttributeEquality(field.clone(), v))
}
fn convert_user_filter(
ldap_info: &LdapInfo,
filter: &LdapFilter,
schema: &PublicSchema,
) -> LdapResult<UserRequestFilter> {
let rec = |f| convert_user_filter(ldap_info, f, schema);
match filter {
LdapFilter::And(filters) => Ok(UserRequestFilter::And(
filters.iter().map(rec).collect::<LdapResult<_>>()?,
@@ -140,71 +177,72 @@ fn convert_user_filter(ldap_info: &LdapInfo, filter: &LdapFilter) -> LdapResult<
)),
LdapFilter::Not(filter) => Ok(UserRequestFilter::Not(Box::new(rec(filter)?))),
LdapFilter::Equality(field, value) => {
let field = &field.to_ascii_lowercase();
match field.as_str() {
"memberof" => Ok(UserRequestFilter::MemberOf(
let field = AttributeName::from(field.as_str());
let value = value.to_ascii_lowercase();
match map_user_field(&field, schema) {
UserFieldType::PrimaryField(UserColumn::UserId) => {
Ok(UserRequestFilter::UserId(UserId::new(&value)))
}
UserFieldType::PrimaryField(field) => Ok(UserRequestFilter::Equality(field, value)),
UserFieldType::Attribute(field, typ, is_list) => {
get_user_attribute_equality_filter(&field, typ, is_list, &value)
}
UserFieldType::NoMatch => {
if !ldap_info.ignored_user_attributes.contains(&field) {
warn!(
r#"Ignoring unknown user attribute "{}" in filter.\n\
To disable this warning, add it to "ignored_user_attributes" in the config"#,
field
);
}
Ok(UserRequestFilter::from(false))
}
UserFieldType::ObjectClass => Ok(UserRequestFilter::from(matches!(
value.as_str(),
"person" | "inetorgperson" | "posixaccount" | "mailaccount"
))),
UserFieldType::MemberOf => Ok(UserRequestFilter::MemberOf(
get_group_id_from_distinguished_name(
&value.to_ascii_lowercase(),
&value,
&ldap_info.base_dn,
&ldap_info.base_dn_str,
)?,
)),
"objectclass" => Ok(UserRequestFilter::from(matches!(
value.to_ascii_lowercase().as_str(),
"person" | "inetorgperson" | "posixaccount" | "mailaccount"
))),
"dn" => Ok(get_user_id_from_distinguished_name(
value.to_ascii_lowercase().as_str(),
&ldap_info.base_dn,
&ldap_info.base_dn_str,
)
.map(UserRequestFilter::UserId)
.unwrap_or_else(|_| {
warn!("Invalid dn filter on user: {}", value);
UserRequestFilter::from(false)
})),
_ => match map_user_field(field) {
UserFieldType::PrimaryField(UserColumn::UserId) => {
Ok(UserRequestFilter::UserId(UserId::new(value)))
}
UserFieldType::PrimaryField(field) => {
Ok(UserRequestFilter::Equality(field, value.clone()))
}
UserFieldType::Attribute(field) => Ok(UserRequestFilter::AttributeEquality(
field.to_owned(),
value.clone(),
)),
UserFieldType::NoMatch => {
if !ldap_info.ignored_user_attributes.contains(field) {
warn!(
r#"Ignoring unknown user attribute "{}" in filter.\n\
To disable this warning, add it to "ignored_user_attributes" in the config"#,
field
);
}
Ok(UserRequestFilter::from(false))
}
},
UserFieldType::EntryDn | UserFieldType::Dn => {
Ok(get_user_id_from_distinguished_name(
value.as_str(),
&ldap_info.base_dn,
&ldap_info.base_dn_str,
)
.map(UserRequestFilter::UserId)
.unwrap_or_else(|_| {
warn!("Invalid dn filter on user: {}", value);
UserRequestFilter::from(false)
}))
}
}
}
LdapFilter::Present(field) => {
let field = &field.to_ascii_lowercase();
let field = AttributeName::from(field.as_str());
// Check that it's a field we support.
Ok(UserRequestFilter::from(
field == "objectclass"
|| field == "dn"
|| field == "distinguishedname"
|| !matches!(map_user_field(field), UserFieldType::NoMatch),
field.as_str() == "objectclass"
|| field.as_str() == "dn"
|| field.as_str() == "distinguishedname"
|| !matches!(map_user_field(&field, schema), UserFieldType::NoMatch),
))
}
LdapFilter::Substring(field, substring_filter) => {
let field = &field.to_ascii_lowercase();
match map_user_field(field.as_str()) {
let field = AttributeName::from(field.as_str());
match map_user_field(&field, schema) {
UserFieldType::PrimaryField(UserColumn::UserId) => Ok(
UserRequestFilter::UserIdSubString(substring_filter.clone().into()),
),
UserFieldType::NoMatch
| UserFieldType::Attribute(_)
UserFieldType::Attribute(_, _, _)
| UserFieldType::ObjectClass
| UserFieldType::MemberOf
| UserFieldType::Dn
| UserFieldType::EntryDn
| UserFieldType::PrimaryField(UserColumn::CreationDate)
| UserFieldType::PrimaryField(UserColumn::Uuid) => Err(LdapError {
code: LdapResultCode::UnwillingToPerform,
@@ -213,6 +251,7 @@ fn convert_user_filter(ldap_info: &LdapInfo, filter: &LdapFilter) -> LdapResult<
field
),
}),
UserFieldType::NoMatch => Ok(UserRequestFilter::from(false)),
UserFieldType::PrimaryField(field) => Ok(UserRequestFilter::SubString(
field,
substring_filter.clone().into(),
@@ -237,8 +276,9 @@ pub async fn get_user_list<Backend: UserListerBackendHandler>(
request_groups: bool,
base: &str,
backend: &Backend,
schema: &PublicSchema,
) -> LdapResult<Vec<UserAndGroups>> {
let filters = convert_user_filter(ldap_info, ldap_filter)?;
let filters = convert_user_filter(ldap_info, ldap_filter, schema)?;
debug!(?filters);
backend
.list_users(Some(filters), request_groups)
@@ -253,7 +293,7 @@ pub fn convert_users_to_ldap_op<'a>(
users: Vec<UserAndGroups>,
attributes: &'a [String],
ldap_info: &'a LdapInfo,
schema: &'a Schema,
schema: &'a PublicSchema,
) -> impl Iterator<Item = LdapOp> + 'a {
users.into_iter().map(move |u| {
LdapOp::SearchResultEntry(make_ldap_search_user_result_entry(


@@ -4,9 +4,12 @@ use ldap3_proto::{proto::LdapSubstringFilter, LdapResultCode};
use tracing::{debug, instrument, warn};
use crate::domain::{
handler::{Schema, SubStringFilter},
handler::SubStringFilter,
ldap::error::{LdapError, LdapResult},
types::{AttributeType, AttributeValue, JpegPhoto, UserColumn, UserId},
schema::{PublicSchema, SchemaAttributeExtractor},
types::{
AttributeName, AttributeType, AttributeValue, GroupName, JpegPhoto, UserColumn, UserId,
},
};
impl From<LdapSubstringFilter> for SubStringFilter {
@@ -102,8 +105,8 @@ pub fn get_group_id_from_distinguished_name(
dn: &str,
base_tree: &[(String, String)],
base_dn_str: &str,
) -> LdapResult<String> {
get_id_from_distinguished_name(dn, base_tree, base_dn_str, true)
) -> LdapResult<GroupName> {
get_id_from_distinguished_name(dn, base_tree, base_dn_str, true).map(GroupName::from)
}
#[instrument(skip(all_attribute_keys), level = "debug")]
@@ -155,50 +158,97 @@ pub fn is_subtree(subtree: &[(String, String)], base_tree: &[(String, String)])
pub enum UserFieldType {
NoMatch,
ObjectClass,
MemberOf,
Dn,
EntryDn,
PrimaryField(UserColumn),
Attribute(&'static str),
Attribute(AttributeName, AttributeType, bool),
}
pub fn map_user_field(field: &str) -> UserFieldType {
assert!(field == field.to_ascii_lowercase());
match field {
pub fn map_user_field(field: &AttributeName, schema: &PublicSchema) -> UserFieldType {
match field.as_str() {
"memberof" | "ismemberof" => UserFieldType::MemberOf,
"objectclass" => UserFieldType::ObjectClass,
"dn" | "distinguishedname" => UserFieldType::Dn,
"entrydn" => UserFieldType::EntryDn,
"uid" | "user_id" | "id" => UserFieldType::PrimaryField(UserColumn::UserId),
"mail" | "email" => UserFieldType::PrimaryField(UserColumn::Email),
"cn" | "displayname" | "display_name" => {
UserFieldType::PrimaryField(UserColumn::DisplayName)
}
"givenname" | "first_name" | "firstname" => UserFieldType::Attribute("first_name"),
"sn" | "last_name" | "lastname" => UserFieldType::Attribute("last_name"),
"avatar" | "jpegphoto" => UserFieldType::Attribute("avatar"),
"givenname" | "first_name" | "firstname" => UserFieldType::Attribute(
AttributeName::from("first_name"),
AttributeType::String,
false,
),
"sn" | "last_name" | "lastname" => UserFieldType::Attribute(
AttributeName::from("last_name"),
AttributeType::String,
false,
),
"avatar" | "jpegphoto" => UserFieldType::Attribute(
AttributeName::from("avatar"),
AttributeType::JpegPhoto,
false,
),
"creationdate" | "createtimestamp" | "modifytimestamp" | "creation_date" => {
UserFieldType::PrimaryField(UserColumn::CreationDate)
}
"entryuuid" | "uuid" => UserFieldType::PrimaryField(UserColumn::Uuid),
_ => UserFieldType::NoMatch,
_ => schema
.get_schema()
.user_attributes
.get_attribute_type(field)
.map(|(t, is_list)| UserFieldType::Attribute(field.clone(), t, is_list))
.unwrap_or(UserFieldType::NoMatch),
}
}
pub fn map_group_field(field: &str) -> Option<&'static str> {
assert!(field == field.to_ascii_lowercase());
Some(match field {
"cn" | "displayname" | "uid" | "display_name" => "display_name",
"creationdate" | "createtimestamp" | "modifytimestamp" | "creation_date" => "creation_date",
"entryuuid" | "uuid" => "uuid",
_ => return None,
})
pub enum GroupFieldType {
NoMatch,
DisplayName,
CreationDate,
ObjectClass,
Dn,
// Like Dn, but returned as part of the attributes.
EntryDn,
Member,
Uuid,
Attribute(AttributeName, AttributeType, bool),
}
pub fn map_group_field(field: &AttributeName, schema: &PublicSchema) -> GroupFieldType {
match field.as_str() {
"dn" | "distinguishedname" => GroupFieldType::Dn,
"entrydn" => GroupFieldType::EntryDn,
"objectclass" => GroupFieldType::ObjectClass,
"cn" | "displayname" | "uid" | "display_name" | "id" => GroupFieldType::DisplayName,
"creationdate" | "createtimestamp" | "modifytimestamp" | "creation_date" => {
GroupFieldType::CreationDate
}
"member" | "uniquemember" => GroupFieldType::Member,
"entryuuid" | "uuid" => GroupFieldType::Uuid,
_ => schema
.get_schema()
.group_attributes
.get_attribute_type(field)
.map(|(t, is_list)| GroupFieldType::Attribute(field.clone(), t, is_list))
.unwrap_or(GroupFieldType::NoMatch),
}
}
pub struct LdapInfo {
pub base_dn: Vec<(String, String)>,
pub base_dn_str: String,
pub ignored_user_attributes: Vec<String>,
pub ignored_group_attributes: Vec<String>,
pub ignored_user_attributes: Vec<AttributeName>,
pub ignored_group_attributes: Vec<AttributeName>,
}
pub fn get_custom_attribute(
pub fn get_custom_attribute<Extractor: SchemaAttributeExtractor>(
attributes: &[AttributeValue],
attribute_name: &str,
schema: &Schema,
attribute_name: &AttributeName,
schema: &PublicSchema,
) -> Option<Vec<Vec<u8>>> {
let convert_date = |date| {
chrono::Utc
@@ -206,13 +256,12 @@ pub fn get_custom_attribute(
.to_rfc3339()
.into_bytes()
};
schema
.user_attributes
Extractor::get_attributes(schema)
.get_attribute_type(attribute_name)
.and_then(|attribute_type| {
attributes
.iter()
.find(|a| a.name == attribute_name)
.find(|a| &a.name == attribute_name)
.map(|attribute| match attribute_type {
(AttributeType::String, false) => {
vec![attribute.value.unwrap::<String>().into_bytes()]


@@ -1,8 +1,10 @@
pub mod deserialize;
pub mod error;
pub mod handler;
pub mod ldap;
pub mod model;
pub mod opaque_handler;
pub mod schema;
pub mod sql_backend_handler;
pub mod sql_group_backend_handler;
pub mod sql_migrations;


@@ -1,7 +1,10 @@
use sea_orm::entity::prelude::*;
use serde::{Deserialize, Serialize};
use crate::domain::{handler::AttributeSchema, types::AttributeType};
use crate::domain::{
handler::AttributeSchema,
types::{AttributeName, AttributeType},
};
#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq, Serialize, Deserialize)]
#[sea_orm(table_name = "group_attribute_schema")]
@@ -11,7 +14,7 @@ pub struct Model {
auto_increment = false,
column_name = "group_attribute_schema_name"
)]
pub attribute_name: String,
pub attribute_name: AttributeName,
#[sea_orm(column_name = "group_attribute_schema_type")]
pub attribute_type: AttributeType,
#[sea_orm(column_name = "group_attribute_schema_is_list")]


@@ -1,7 +1,7 @@
use sea_orm::entity::prelude::*;
use serde::{Deserialize, Serialize};
use crate::domain::types::{AttributeValue, GroupId, Serialized};
use crate::domain::types::{AttributeName, AttributeValue, GroupId, Serialized};
#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq, Serialize, Deserialize)]
#[sea_orm(table_name = "group_attributes")]
@@ -17,7 +17,7 @@ pub struct Model {
auto_increment = false,
column_name = "group_attribute_name"
)]
pub attribute_name: String,
pub attribute_name: AttributeName,
#[sea_orm(column_name = "group_attribute_value")]
pub value: Serialized,
}


@@ -3,14 +3,15 @@
use sea_orm::entity::prelude::*;
use serde::{Deserialize, Serialize};
use crate::domain::types::{GroupId, Uuid};
use crate::domain::types::{GroupId, GroupName, Uuid};
#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq, Serialize, Deserialize)]
#[sea_orm(table_name = "groups")]
pub struct Model {
#[sea_orm(primary_key, auto_increment = false)]
pub group_id: GroupId,
pub display_name: String,
pub display_name: GroupName,
pub lowercase_display_name: String,
pub creation_date: chrono::NaiveDateTime,
pub uuid: Uuid,
}


@@ -1,7 +1,10 @@
use sea_orm::entity::prelude::*;
use serde::{Deserialize, Serialize};
use crate::domain::{handler::AttributeSchema, types::AttributeType};
use crate::domain::{
handler::AttributeSchema,
types::{AttributeName, AttributeType},
};
#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq, Serialize, Deserialize)]
#[sea_orm(table_name = "user_attribute_schema")]
@@ -11,7 +14,7 @@ pub struct Model {
auto_increment = false,
column_name = "user_attribute_schema_name"
)]
pub attribute_name: String,
pub attribute_name: AttributeName,
#[sea_orm(column_name = "user_attribute_schema_type")]
pub attribute_type: AttributeType,
#[sea_orm(column_name = "user_attribute_schema_is_list")]


@@ -1,7 +1,7 @@
use sea_orm::entity::prelude::*;
use serde::{Deserialize, Serialize};
use crate::domain::types::{AttributeValue, Serialized, UserId};
use crate::domain::types::{AttributeName, AttributeValue, Serialized, UserId};
#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq, Serialize, Deserialize)]
#[sea_orm(table_name = "user_attributes")]
@@ -17,7 +17,7 @@ pub struct Model {
auto_increment = false,
column_name = "user_attribute_name"
)]
pub attribute_name: String,
pub attribute_name: AttributeName,
#[sea_orm(column_name = "user_attribute_value")]
pub value: Serialized,
}


@@ -3,7 +3,7 @@
use sea_orm::{entity::prelude::*, sea_query::BlobSize};
use serde::{Deserialize, Serialize};
use crate::domain::types::{UserId, Uuid};
use crate::domain::types::{Email, UserId, Uuid};
#[derive(Copy, Clone, Default, Debug, DeriveEntity)]
pub struct Entity;
@@ -13,7 +13,8 @@ pub struct Entity;
pub struct Model {
#[sea_orm(primary_key, auto_increment = false)]
pub user_id: UserId,
pub email: String,
pub email: Email,
pub lowercase_email: String,
pub display_name: Option<String>,
pub creation_date: chrono::NaiveDateTime,
pub password_hash: Option<Vec<u8>>,
@@ -32,6 +33,7 @@ impl EntityName for Entity {
pub enum Column {
UserId,
Email,
LowercaseEmail,
DisplayName,
CreationDate,
PasswordHash,
@@ -47,6 +49,7 @@ impl ColumnTrait for Column {
match self {
Column::UserId => ColumnType::String(Some(255)),
Column::Email => ColumnType::String(Some(255)),
Column::LowercaseEmail => ColumnType::String(Some(255)),
Column::DisplayName => ColumnType::String(Some(255)),
Column::CreationDate => ColumnType::DateTime,
Column::PasswordHash => ColumnType::Binary(BlobSize::Medium),

server/src/domain/schema.rs (new file, 124 lines)

@@ -0,0 +1,124 @@
use crate::domain::{
handler::{AttributeList, AttributeSchema, Schema},
types::AttributeType,
};
use serde::{Deserialize, Serialize};
#[derive(PartialEq, Eq, Debug, Serialize, Deserialize)]
pub struct PublicSchema(Schema);
impl PublicSchema {
pub fn get_schema(&self) -> &Schema {
&self.0
}
}
pub trait SchemaAttributeExtractor: std::marker::Send {
fn get_attributes(schema: &PublicSchema) -> &AttributeList;
}
pub struct SchemaUserAttributeExtractor;
impl SchemaAttributeExtractor for SchemaUserAttributeExtractor {
fn get_attributes(schema: &PublicSchema) -> &AttributeList {
&schema.get_schema().user_attributes
}
}
pub struct SchemaGroupAttributeExtractor;
impl SchemaAttributeExtractor for SchemaGroupAttributeExtractor {
fn get_attributes(schema: &PublicSchema) -> &AttributeList {
&schema.get_schema().group_attributes
}
}
impl From<Schema> for PublicSchema {
fn from(mut schema: Schema) -> Self {
schema.user_attributes.attributes.extend_from_slice(&[
AttributeSchema {
name: "user_id".into(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
is_editable: false,
is_hardcoded: true,
},
AttributeSchema {
name: "creation_date".into(),
attribute_type: AttributeType::DateTime,
is_list: false,
is_visible: true,
is_editable: false,
is_hardcoded: true,
},
AttributeSchema {
name: "mail".into(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
is_editable: true,
is_hardcoded: true,
},
AttributeSchema {
name: "uuid".into(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
is_editable: false,
is_hardcoded: true,
},
AttributeSchema {
name: "display_name".into(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
is_editable: true,
is_hardcoded: true,
},
]);
schema
.user_attributes
.attributes
.sort_by(|a, b| a.name.cmp(&b.name));
schema.group_attributes.attributes.extend_from_slice(&[
AttributeSchema {
name: "group_id".into(),
attribute_type: AttributeType::Integer,
is_list: false,
is_visible: true,
is_editable: false,
is_hardcoded: true,
},
AttributeSchema {
name: "creation_date".into(),
attribute_type: AttributeType::DateTime,
is_list: false,
is_visible: true,
is_editable: false,
is_hardcoded: true,
},
AttributeSchema {
name: "uuid".into(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
is_editable: false,
is_hardcoded: true,
},
AttributeSchema {
name: "display_name".into(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
is_editable: true,
is_hardcoded: true,
},
]);
schema
.group_attributes
.attributes
.sort_by(|a, b| a.name.cmp(&b.name));
PublicSchema(schema)
}
}
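The `From<Schema>` impl above appends the hardcoded attributes to the user-defined list and then re-sorts by name, so lookups see one deterministic, merged view. A minimal std-only sketch of that merge-and-sort step (plain `String` names here stand in for the real `AttributeSchema` entries; this is illustrative, not the lldap types):

```rust
// Sketch: merge hardcoded attribute names into a user-defined list,
// then sort by name, as PublicSchema::from does for the full schema.
fn merge_hardcoded(mut user_defined: Vec<String>, hardcoded: &[&str]) -> Vec<String> {
    user_defined.extend(hardcoded.iter().map(|s| s.to_string()));
    user_defined.sort();
    user_defined
}

fn main() {
    let merged = merge_hardcoded(
        vec!["avatar".to_string(), "first_name".to_string()],
        &["user_id", "creation_date", "mail", "uuid", "display_name"],
    );
    // The result is fully sorted, so iteration order is stable
    // regardless of whether an attribute is hardcoded or custom.
    assert!(merged.windows(2).all(|w| w[0] <= w[1]));
    println!("{:?}", merged);
}
```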


@@ -23,7 +23,7 @@ pub mod tests {
use crate::{
domain::{
handler::{
CreateUserRequest, GroupBackendHandler, UserBackendHandler,
CreateGroupRequest, CreateUserRequest, GroupBackendHandler, UserBackendHandler,
UserListerBackendHandler, UserRequestFilter,
},
sql_tables::init_table,
@@ -63,7 +63,7 @@ pub mod tests {
opaque::client::registration::start_registration(pass.as_bytes(), &mut rng).unwrap();
let response = handler
.registration_start(registration::ClientRegistrationStartRequest {
username: name.to_string(),
username: name.into(),
registration_start_request: client_registration_start.message,
})
.await
@@ -87,7 +87,7 @@ pub mod tests {
handler
.create_user(CreateUserRequest {
user_id: UserId::new(name),
email: format!("{}@bob.bob", name),
email: format!("{}@bob.bob", name).into(),
display_name: Some("display ".to_string() + name),
first_name: Some("first ".to_string() + name),
last_name: Some("last ".to_string() + name),
@@ -98,7 +98,13 @@ pub mod tests {
}
pub async fn insert_group(handler: &SqlBackendHandler, name: &str) -> GroupId {
handler.create_group(name).await.unwrap()
handler
.create_group(CreateGroupRequest {
display_name: name.into(),
..Default::default()
})
.await
.unwrap()
}
pub async fn insert_membership(handler: &SqlBackendHandler, group_id: GroupId, user_id: &str) {


@@ -1,20 +1,34 @@
use crate::domain::{
error::{DomainError, Result},
handler::{
GroupBackendHandler, GroupListerBackendHandler, GroupRequestFilter, UpdateGroupRequest,
CreateGroupRequest, GroupBackendHandler, GroupListerBackendHandler, GroupRequestFilter,
UpdateGroupRequest,
},
model::{self, GroupColumn, MembershipColumn},
sql_backend_handler::SqlBackendHandler,
types::{AttributeValue, Group, GroupDetails, GroupId, Uuid},
types::{AttributeName, AttributeValue, Group, GroupDetails, GroupId, Serialized, Uuid},
};
use async_trait::async_trait;
use sea_orm::{
sea_query::{Alias, Cond, Expr, Func, IntoCondition, SimpleExpr},
ActiveModelTrait, ActiveValue, ColumnTrait, EntityTrait, QueryFilter, QueryOrder, QuerySelect,
QueryTrait,
sea_query::{Alias, Cond, Expr, Func, IntoCondition, OnConflict, SimpleExpr},
ActiveModelTrait, ColumnTrait, DatabaseTransaction, EntityTrait, QueryFilter, QueryOrder,
QuerySelect, QueryTrait, Set, TransactionTrait,
};
use tracing::instrument;
fn attribute_condition(name: AttributeName, value: Serialized) -> Cond {
Expr::in_subquery(
Expr::col(GroupColumn::GroupId.as_column_ref()),
model::GroupAttributes::find()
.select_only()
.column(model::GroupAttributesColumn::GroupId)
.filter(model::GroupAttributesColumn::AttributeName.eq(name))
.filter(model::GroupAttributesColumn::Value.eq(value))
.into_query(),
)
.into_condition()
}
fn get_group_filter_expr(filter: GroupRequestFilter) -> Cond {
use GroupRequestFilter::*;
let group_table = Alias::new("groups");
@@ -36,7 +50,9 @@ fn get_group_filter_expr(filter: GroupRequestFilter) -> Cond {
}
}
Not(f) => get_group_filter_expr(*f).not(),
DisplayName(name) => GroupColumn::DisplayName.eq(name).into_condition(),
DisplayName(name) => GroupColumn::LowercaseDisplayName
.eq(name.as_str().to_lowercase())
.into_condition(),
GroupId(id) => GroupColumn::GroupId.eq(id.0).into_condition(),
Uuid(uuid) => GroupColumn::Uuid.eq(uuid.to_string()).into_condition(),
// WHERE (group_id in (SELECT group_id FROM memberships WHERE user_id = user))
@@ -55,6 +71,7 @@ fn get_group_filter_expr(filter: GroupRequestFilter) -> Cond {
))))
.like(filter.to_sql_filter())
.into_condition(),
AttributeEquality(name, value) => attribute_condition(name, value),
}
}
@@ -139,29 +156,63 @@ impl GroupBackendHandler for SqlBackendHandler {
#[instrument(skip(self), level = "debug", err, fields(group_id = ?request.group_id))]
async fn update_group(&self, request: UpdateGroupRequest) -> Result<()> {
let update_group = model::groups::ActiveModel {
group_id: ActiveValue::Set(request.group_id),
display_name: request
.display_name
.map(ActiveValue::Set)
.unwrap_or_default(),
..Default::default()
};
update_group.update(&self.sql_pool).await?;
Ok(())
Ok(self
.sql_pool
.transaction::<_, (), DomainError>(|transaction| {
Box::pin(
async move { Self::update_group_with_transaction(request, transaction).await },
)
})
.await?)
}
#[instrument(skip(self), level = "debug", ret, err)]
async fn create_group(&self, group_name: &str) -> Result<GroupId> {
async fn create_group(&self, request: CreateGroupRequest) -> Result<GroupId> {
let now = chrono::Utc::now().naive_utc();
let uuid = Uuid::from_name_and_date(group_name, &now);
let uuid = Uuid::from_name_and_date(request.display_name.as_str(), &now);
let lower_display_name = request.display_name.as_str().to_lowercase();
let new_group = model::groups::ActiveModel {
display_name: ActiveValue::Set(group_name.to_owned()),
creation_date: ActiveValue::Set(now),
uuid: ActiveValue::Set(uuid),
display_name: Set(request.display_name),
lowercase_display_name: Set(lower_display_name),
creation_date: Set(now),
uuid: Set(uuid),
..Default::default()
};
Ok(new_group.insert(&self.sql_pool).await?.group_id)
Ok(self
.sql_pool
.transaction::<_, GroupId, DomainError>(|transaction| {
Box::pin(async move {
let schema = Self::get_schema_with_transaction(transaction).await?;
let group_id = new_group.insert(transaction).await?.group_id;
let mut new_group_attributes = Vec::new();
for attribute in request.attributes {
if schema
.group_attributes
.get_attribute_type(&attribute.name)
.is_some()
{
new_group_attributes.push(model::group_attributes::ActiveModel {
group_id: Set(group_id),
attribute_name: Set(attribute.name),
value: Set(attribute.value),
});
} else {
return Err(DomainError::InternalError(format!(
"Attribute name {} doesn't exist in the group schema,
yet was attempted to be inserted in the database",
&attribute.name
)));
}
}
if !new_group_attributes.is_empty() {
model::GroupAttributes::insert_many(new_group_attributes)
.exec(transaction)
.await?;
}
Ok(group_id)
})
})
.await?)
}
#[instrument(skip(self), level = "debug", err)]
@@ -179,10 +230,89 @@ impl GroupBackendHandler for SqlBackendHandler {
}
}
impl SqlBackendHandler {
async fn update_group_with_transaction(
request: UpdateGroupRequest,
transaction: &DatabaseTransaction,
) -> Result<()> {
let lower_display_name = request
.display_name
.as_ref()
.map(|s| s.as_str().to_lowercase());
let update_group = model::groups::ActiveModel {
group_id: Set(request.group_id),
display_name: request.display_name.map(Set).unwrap_or_default(),
lowercase_display_name: lower_display_name.map(Set).unwrap_or_default(),
..Default::default()
};
update_group.update(transaction).await?;
let mut update_group_attributes = Vec::new();
let mut remove_group_attributes = Vec::new();
let schema = Self::get_schema_with_transaction(transaction).await?;
for attribute in request.insert_attributes {
if schema
.group_attributes
.get_attribute_type(&attribute.name)
.is_some()
{
update_group_attributes.push(model::group_attributes::ActiveModel {
group_id: Set(request.group_id),
attribute_name: Set(attribute.name.to_owned()),
value: Set(attribute.value),
});
} else {
return Err(DomainError::InternalError(format!(
"Group attribute name {} doesn't exist in the schema, yet was attempted to be inserted in the database",
&attribute.name
)));
}
}
for attribute in request.delete_attributes {
if schema
.group_attributes
.get_attribute_type(&attribute)
.is_some()
{
remove_group_attributes.push(attribute);
} else {
return Err(DomainError::InternalError(format!(
"Group attribute name {} doesn't exist in the schema, yet was attempted to be removed from the database",
attribute
)));
}
}
if !remove_group_attributes.is_empty() {
model::GroupAttributes::delete_many()
.filter(model::GroupAttributesColumn::GroupId.eq(request.group_id))
.filter(model::GroupAttributesColumn::AttributeName.is_in(remove_group_attributes))
.exec(transaction)
.await?;
}
if !update_group_attributes.is_empty() {
model::GroupAttributes::insert_many(update_group_attributes)
.on_conflict(
OnConflict::columns([
model::GroupAttributesColumn::GroupId,
model::GroupAttributesColumn::AttributeName,
])
.update_column(model::GroupAttributesColumn::Value)
.to_owned(),
)
.exec(transaction)
.await?;
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::domain::{handler::SubStringFilter, sql_backend_handler::tests::*, types::UserId};
use crate::domain::{
handler::{CreateAttributeRequest, SchemaBackendHandler, SubStringFilter},
sql_backend_handler::tests::*,
types::{AttributeType, GroupName, Serialized, UserId},
};
use pretty_assertions::assert_eq;
async fn get_group_ids(
@@ -201,7 +331,7 @@ mod tests {
async fn get_group_names(
handler: &SqlBackendHandler,
filters: Option<GroupRequestFilter>,
) -> Vec<String> {
) -> Vec<GroupName> {
handler
.list_groups(filters)
.await
@@ -217,9 +347,9 @@ mod tests {
assert_eq!(
get_group_names(&fixture.handler, None).await,
vec![
"Best Group".to_owned(),
"Empty Group".to_owned(),
"Worst Group".to_owned()
"Best Group".into(),
"Empty Group".into(),
"Worst Group".into()
]
);
}
@@ -231,12 +361,25 @@ mod tests {
get_group_names(
&fixture.handler,
Some(GroupRequestFilter::Or(vec![
GroupRequestFilter::DisplayName("Empty Group".to_owned()),
GroupRequestFilter::DisplayName("Empty Group".into()),
GroupRequestFilter::Member(UserId::new("bob")),
]))
)
.await,
vec!["Best Group".to_owned(), "Empty Group".to_owned()]
vec!["Best Group".into(), "Empty Group".into()]
);
}
#[tokio::test]
async fn test_list_groups_case_insensitive_filter() {
let fixture = TestFixture::new().await;
assert_eq!(
get_group_names(
&fixture.handler,
Some(GroupRequestFilter::DisplayName("eMpTy gRoup".into()),)
)
.await,
vec!["Empty Group".into()]
);
}
@@ -248,7 +391,7 @@ mod tests {
&fixture.handler,
Some(GroupRequestFilter::And(vec![
GroupRequestFilter::Not(Box::new(GroupRequestFilter::DisplayName(
"value".to_owned()
"value".into()
))),
GroupRequestFilter::GroupId(fixture.groups[0]),
]))
@@ -276,6 +419,46 @@ mod tests {
);
}
#[tokio::test]
async fn test_list_groups_other_filter() {
let fixture = TestFixture::new().await;
fixture
.handler
.add_group_attribute(CreateAttributeRequest {
name: "gid".into(),
attribute_type: AttributeType::Integer,
is_list: false,
is_visible: true,
is_editable: true,
})
.await
.unwrap();
fixture
.handler
.update_group(UpdateGroupRequest {
group_id: fixture.groups[0],
display_name: None,
delete_attributes: Vec::new(),
insert_attributes: vec![AttributeValue {
name: "gid".into(),
value: Serialized::from(&512),
}],
})
.await
.unwrap();
assert_eq!(
get_group_ids(
&fixture.handler,
Some(GroupRequestFilter::AttributeEquality(
AttributeName::from("gid"),
Serialized::from(&512),
)),
)
.await,
vec![fixture.groups[0]]
);
}
#[tokio::test]
async fn test_get_group_details() {
let fixture = TestFixture::new().await;
@@ -285,7 +468,7 @@ mod tests {
.await
.unwrap();
assert_eq!(details.group_id, fixture.groups[0]);
assert_eq!(details.display_name, "Best Group");
assert_eq!(details.display_name, "Best Group".into());
assert_eq!(
get_group_ids(
&fixture.handler,
@@ -303,7 +486,9 @@ mod tests {
.handler
.update_group(UpdateGroupRequest {
group_id: fixture.groups[0],
display_name: Some("Awesomest Group".to_owned()),
display_name: Some("Awesomest Group".into()),
delete_attributes: Vec::new(),
insert_attributes: Vec::new(),
})
.await
.unwrap();
@@ -312,7 +497,7 @@ mod tests {
.get_group_details(fixture.groups[0])
.await
.unwrap();
assert_eq!(details.display_name, "Awesomest Group");
assert_eq!(details.display_name, "Awesomest Group".into());
}
#[tokio::test]
@@ -332,4 +517,93 @@ mod tests {
vec![fixture.groups[2], fixture.groups[1]]
);
}
#[tokio::test]
async fn test_create_group() {
let fixture = TestFixture::new().await;
assert_eq!(
get_group_ids(&fixture.handler, None).await,
vec![fixture.groups[0], fixture.groups[2], fixture.groups[1]]
);
fixture
.handler
.add_group_attribute(CreateAttributeRequest {
name: "new_attribute".into(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
is_editable: true,
})
.await
.unwrap();
let new_group_id = fixture
.handler
.create_group(CreateGroupRequest {
display_name: "New Group".into(),
attributes: vec![AttributeValue {
name: "new_attribute".into(),
value: Serialized::from("value"),
}],
})
.await
.unwrap();
let group_details = fixture
.handler
.get_group_details(new_group_id)
.await
.unwrap();
assert_eq!(group_details.display_name, "New Group".into());
assert_eq!(
group_details.attributes,
vec![AttributeValue {
name: "new_attribute".into(),
value: Serialized::from("value"),
}]
);
}
#[tokio::test]
async fn test_set_group_attributes() {
let fixture = TestFixture::new().await;
fixture
.handler
.add_group_attribute(CreateAttributeRequest {
name: "new_attribute".into(),
attribute_type: AttributeType::Integer,
is_list: false,
is_visible: true,
is_editable: true,
})
.await
.unwrap();
let group_id = fixture.groups[0];
let attributes = vec![AttributeValue {
name: "new_attribute".into(),
value: Serialized::from(&42i64),
}];
fixture
.handler
.update_group(UpdateGroupRequest {
group_id,
display_name: None,
delete_attributes: Vec::new(),
insert_attributes: attributes.clone(),
})
.await
.unwrap();
let details = fixture.handler.get_group_details(group_id).await.unwrap();
assert_eq!(details.attributes, attributes);
fixture
.handler
.update_group(UpdateGroupRequest {
group_id,
display_name: None,
delete_attributes: vec!["new_attribute".into()],
insert_attributes: Vec::new(),
})
.await
.unwrap();
let details = fixture.handler.get_group_details(group_id).await.unwrap();
assert_eq!(details.attributes, Vec::new());
}
}
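The `DisplayName` filter and the `test_list_groups_case_insensitive_filter` test above rely on the normalize-on-write pattern: the lowercased name is stored in `lowercase_display_name` at insert time, so reads are a plain equality match with no per-row `LOWER()`. A std-only sketch of that pattern (the `Groups` struct and its fields here are illustrative, not the lldap model):

```rust
use std::collections::HashMap;

// Sketch: store a lowercased copy of the display name at insert time,
// then answer case-insensitive lookups with a plain equality match,
// mirroring the lowercase_display_name column added in migration v6.
struct Groups {
    by_lowercase_name: HashMap<String, u32>, // lowercase name -> group id
}

impl Groups {
    fn new() -> Self {
        Self { by_lowercase_name: HashMap::new() }
    }

    fn insert(&mut self, id: u32, display_name: &str) {
        self.by_lowercase_name.insert(display_name.to_lowercase(), id);
    }

    fn find(&self, name: &str) -> Option<u32> {
        // Normalizing the query once is the whole trick: the stored side
        // is already lowercase, so the comparison stays a cheap equality.
        self.by_lowercase_name.get(&name.to_lowercase()).copied()
    }
}

fn main() {
    let mut groups = Groups::new();
    groups.insert(1, "Empty Group");
    assert_eq!(groups.find("eMpTy gRoup"), Some(1));
    assert_eq!(groups.find("missing"), None);
}
```

This is also why the migration backfills the lowercase columns with `Func::lower` in one `UPDATE`: rows written before the migration must satisfy the same invariant as new inserts.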


@@ -5,7 +5,8 @@ use crate::domain::{
use itertools::Itertools;
use sea_orm::{
sea_query::{
self, all, ColumnDef, Expr, ForeignKey, ForeignKeyAction, Func, Index, Query, Table, Value,
self, all, Alias, BinOper, BlobSize::Blob, ColumnDef, Expr, ForeignKey, ForeignKeyAction,
Func, Index, Query, SimpleExpr, Table, Value,
},
ConnectionTrait, DatabaseTransaction, DbErr, DeriveIden, FromQueryResult, Iden, Order,
Statement, TransactionTrait,
@@ -18,6 +19,7 @@ pub enum Users {
Table,
UserId,
Email,
LowercaseEmail,
DisplayName,
FirstName,
LastName,
@@ -34,6 +36,7 @@ pub enum Groups {
Table,
GroupId,
DisplayName,
LowercaseDisplayName,
CreationDate,
Uuid,
}
@@ -45,6 +48,7 @@ pub enum Memberships {
GroupId,
}
#[allow(clippy::enum_variant_names)] // The table names are generated from the enum.
#[derive(DeriveIden, PartialEq, Eq, Debug, Serialize, Deserialize, Clone, Copy)]
pub enum UserAttributeSchema {
Table,
@@ -64,6 +68,7 @@ pub enum UserAttributes {
UserAttributeValue,
}
#[allow(clippy::enum_variant_names)] // The table names are generated from the enum.
#[derive(DeriveIden, PartialEq, Eq, Debug, Serialize, Deserialize, Clone, Copy)]
pub enum GroupAttributeSchema {
Table,
@@ -89,6 +94,8 @@ pub enum Metadata {
Table,
// Which version of the schema we're at.
Version,
PrivateKeyHash,
PrivateKeyLocation,
}
#[derive(FromQueryResult, PartialEq, Eq, Debug)]
@@ -875,6 +882,155 @@ async fn migrate_to_v5(transaction: DatabaseTransaction) -> Result<DatabaseTrans
Ok(transaction)
}
async fn migrate_to_v6(transaction: DatabaseTransaction) -> Result<DatabaseTransaction, DbErr> {
let builder = transaction.get_database_backend();
transaction
.execute(
builder.build(
Table::alter().table(Groups::Table).add_column(
ColumnDef::new(Groups::LowercaseDisplayName)
.string_len(255)
.not_null()
.default("UNSET"),
),
),
)
.await?;
transaction
.execute(
builder.build(
Table::alter().table(Users::Table).add_column(
ColumnDef::new(Users::LowercaseEmail)
.string_len(255)
.not_null()
.default("UNSET"),
),
),
)
.await?;
transaction
.execute(builder.build(Query::update().table(Groups::Table).value(
Groups::LowercaseDisplayName,
Func::lower(Expr::col(Groups::DisplayName)),
)))
.await?;
transaction
.execute(
builder.build(
Query::update()
.table(Users::Table)
.value(Users::LowercaseEmail, Func::lower(Expr::col(Users::Email))),
),
)
.await?;
Ok(transaction)
}
async fn migrate_to_v7(transaction: DatabaseTransaction) -> Result<DatabaseTransaction, DbErr> {
let builder = transaction.get_database_backend();
transaction
.execute(
builder.build(
Table::alter()
.table(Metadata::Table)
.add_column(ColumnDef::new(Metadata::PrivateKeyHash).blob(Blob(Some(32)))),
),
)
.await?;
transaction
.execute(
builder.build(
Table::alter()
.table(Metadata::Table)
.add_column(ColumnDef::new(Metadata::PrivateKeyLocation).string_len(255)),
),
)
.await?;
Ok(transaction)
}
async fn migrate_to_v8(transaction: DatabaseTransaction) -> Result<DatabaseTransaction, DbErr> {
let builder = transaction.get_database_backend();
// Remove duplicate memberships.
#[derive(FromQueryResult)]
#[allow(dead_code)]
struct MembershipInfo {
user_id: UserId,
group_id: GroupId,
cnt: i64,
}
let mut delete_queries = MembershipInfo::find_by_statement(
builder.build(
Query::select()
.from(Memberships::Table)
.columns([Memberships::UserId, Memberships::GroupId])
.expr_as(
Expr::count(Expr::col((Memberships::Table, Memberships::UserId))),
Alias::new("cnt"),
)
.group_by_columns([Memberships::UserId, Memberships::GroupId])
.cond_having(all![SimpleExpr::Binary(
Box::new(Expr::col((Memberships::Table, Memberships::UserId)).count()),
BinOper::GreaterThan,
Box::new(SimpleExpr::Value(1.into()))
)]),
),
)
.all(&transaction)
.await?
.into_iter()
.map(
|MembershipInfo {
user_id,
group_id,
cnt,
}| {
builder
.build(
Query::delete()
.from_table(Memberships::Table)
.cond_where(all![
Expr::col(Memberships::UserId).eq(user_id),
Expr::col(Memberships::GroupId).eq(group_id)
])
.limit(cnt as u64 - 1),
)
.to_owned()
},
)
.peekable();
if delete_queries.peek().is_some() {
match transaction.get_database_backend() {
sea_orm::DatabaseBackend::Sqlite => {
return Err(DbErr::Migration(format!(
"The Sqlite driver does not support LIMIT in DELETE. Run these queries manually:\n{}" , delete_queries.map(|s| s.to_string()).join("\n"))));
}
sea_orm::DatabaseBackend::MySql | sea_orm::DatabaseBackend::Postgres => {
for query in delete_queries {
transaction.execute(query).await?;
}
}
}
}
transaction
.execute(
builder.build(
Index::create()
.if_not_exists()
.name("unique-memberships")
.table(Memberships::Table)
.col(Memberships::UserId)
.col(Memberships::GroupId)
.unique(),
),
)
.await?;
Ok(transaction)
}
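`migrate_to_v8` above first deletes surplus membership rows so that exactly one row per `(user_id, group_id)` pair remains, and only then creates the unique index. A std-only sketch of that dedup invariant, operating on in-memory rows instead of SQL (illustrative types, not the lldap model):

```rust
use std::collections::HashSet;

// Sketch: drop duplicate (user_id, group_id) memberships, keeping the
// first occurrence — the invariant the v8 unique index then enforces.
fn dedup_memberships(rows: Vec<(String, i32)>) -> Vec<(String, i32)> {
    let mut seen = HashSet::new();
    rows.into_iter()
        // insert() returns false for an already-seen pair, filtering it out.
        .filter(|(user, group)| seen.insert((user.clone(), *group)))
        .collect()
}

fn main() {
    let rows = vec![
        ("bob".to_string(), 1),
        ("bob".to_string(), 1), // duplicate to be removed
        ("bob".to_string(), 2),
    ];
    let deduped = dedup_memberships(rows);
    assert_eq!(deduped.len(), 2);
}
```

The SQL version has to express "keep one" as `DELETE … LIMIT cnt - 1` per duplicated pair, which is why SQLite (no `LIMIT` in `DELETE`) gets the queries printed for manual execution instead.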
// This is needed to make an array of async functions.
macro_rules! to_sync {
($l:ident) => {
@@ -900,6 +1056,9 @@ pub async fn migrate_from_version(
to_sync!(migrate_to_v3),
to_sync!(migrate_to_v4),
to_sync!(migrate_to_v5),
to_sync!(migrate_to_v6),
to_sync!(migrate_to_v7),
to_sync!(migrate_to_v8),
];
assert_eq!(migrations.len(), (LAST_SCHEMA_VERSION.0 - 1) as usize);
for migration in 2..=last_version.0 {


@@ -33,7 +33,7 @@ fn passwords_match(
server_setup,
Some(password_file),
client_login_start_result.message,
username.as_str(),
username,
)?;
client::login::finish_login(
client_login_start_result.state,
@@ -100,15 +100,13 @@ impl OpaqueHandler for SqlOpaqueHandler {
&self,
request: login::ClientLoginStartRequest,
) -> Result<login::ServerLoginStartResponse> {
let user_id = request.username;
let maybe_password_file = self
.get_password_file_for_user(UserId::new(&request.username))
.get_password_file_for_user(user_id.clone())
.await?
.map(|bytes| {
opaque::server::ServerRegistration::deserialize(&bytes).map_err(|_| {
DomainError::InternalError(format!(
"Corrupted password file for {}",
&request.username
))
DomainError::InternalError(format!("Corrupted password file for {}", &user_id))
})
})
.transpose()?;
@@ -120,11 +118,11 @@ impl OpaqueHandler for SqlOpaqueHandler {
self.config.get_server_setup(),
maybe_password_file,
request.login_start_request,
&request.username,
&user_id,
)?;
let secret_key = self.get_orion_secret_key()?;
let server_data = login::ServerData {
username: request.username,
username: user_id,
server_login: start_response.state,
};
let encrypted_state = orion::aead::seal(&secret_key, &bincode::serialize(&server_data)?)?;
@@ -151,7 +149,7 @@ impl OpaqueHandler for SqlOpaqueHandler {
opaque::server::login::finish_login(server_login, request.credential_finalization)?
.session_key;
Ok(UserId::new(&username))
Ok(username)
}
#[instrument(skip_all, level = "debug", err)]
@@ -191,7 +189,7 @@ impl OpaqueHandler for SqlOpaqueHandler {
opaque::server::registration::get_password_file(request.registration_upload);
// Set the user password to the new password.
let user_update = model::users::ActiveModel {
user_id: ActiveValue::Set(UserId::new(&username)),
user_id: ActiveValue::Set(username),
password_hash: ActiveValue::Set(Some(password_file.serialize())),
..Default::default()
};
@@ -204,7 +202,7 @@ impl OpaqueHandler for SqlOpaqueHandler {
#[instrument(skip_all, level = "debug", err, fields(username = %username.as_str()))]
pub(crate) async fn register_password(
opaque_handler: &SqlOpaqueHandler,
username: &UserId,
username: UserId,
password: &SecUtf8,
) -> Result<()> {
let mut rng = rand::rngs::OsRng;
@@ -213,7 +211,7 @@ pub(crate) async fn register_password(
opaque::client::registration::start_registration(password.unsecure().as_bytes(), &mut rng)?;
let start_response = opaque_handler
.registration_start(ClientRegistrationStartRequest {
username: username.to_string(),
username,
registration_start_request: registration_start.message,
})
.await?;
@@ -245,7 +243,7 @@ mod tests {
let login_start = opaque::client::login::start_login(password, &mut rng)?;
let start_response = opaque_handler
.login_start(ClientLoginStartRequest {
username: username.to_string(),
username: UserId::new(username),
login_start_request: login_start.message,
})
.await?;
@@ -276,7 +274,7 @@ mod tests {
.unwrap_err();
register_password(
&opaque_handler,
&UserId::new("bob"),
UserId::new("bob"),
&secstr::SecUtf8::from("bob00"),
)
.await?;


@@ -1,43 +1,105 @@
use crate::domain::{
error::Result,
handler::{AttributeSchema, Schema, SchemaBackendHandler},
error::{DomainError, Result},
handler::{
AttributeList, AttributeSchema, CreateAttributeRequest, ReadSchemaBackendHandler, Schema,
SchemaBackendHandler,
},
model,
sql_backend_handler::SqlBackendHandler,
types::AttributeName,
};
use async_trait::async_trait;
use sea_orm::{EntityTrait, QueryOrder};
use sea_orm::{
ActiveModelTrait, DatabaseTransaction, EntityTrait, QueryOrder, Set, TransactionTrait,
};
use super::handler::AttributeList;
#[async_trait]
impl ReadSchemaBackendHandler for SqlBackendHandler {
async fn get_schema(&self) -> Result<Schema> {
Ok(self
.sql_pool
.transaction::<_, Schema, DomainError>(|transaction| {
Box::pin(async move { Self::get_schema_with_transaction(transaction).await })
})
.await?)
}
}
#[async_trait]
impl SchemaBackendHandler for SqlBackendHandler {
async fn get_schema(&self) -> Result<Schema> {
Ok(Schema {
user_attributes: AttributeList {
attributes: self.get_user_attributes().await?,
},
group_attributes: AttributeList {
attributes: self.get_group_attributes().await?,
},
})
async fn add_user_attribute(&self, request: CreateAttributeRequest) -> Result<()> {
let new_attribute = model::user_attribute_schema::ActiveModel {
attribute_name: Set(request.name),
attribute_type: Set(request.attribute_type),
is_list: Set(request.is_list),
is_user_visible: Set(request.is_visible),
is_user_editable: Set(request.is_editable),
is_hardcoded: Set(false),
};
new_attribute.insert(&self.sql_pool).await?;
Ok(())
}
async fn add_group_attribute(&self, request: CreateAttributeRequest) -> Result<()> {
let new_attribute = model::group_attribute_schema::ActiveModel {
attribute_name: Set(request.name),
attribute_type: Set(request.attribute_type),
is_list: Set(request.is_list),
is_group_visible: Set(request.is_visible),
is_group_editable: Set(request.is_editable),
is_hardcoded: Set(false),
};
new_attribute.insert(&self.sql_pool).await?;
Ok(())
}
async fn delete_user_attribute(&self, name: &AttributeName) -> Result<()> {
model::UserAttributeSchema::delete_by_id(name.clone())
.exec(&self.sql_pool)
.await?;
Ok(())
}
async fn delete_group_attribute(&self, name: &AttributeName) -> Result<()> {
model::GroupAttributeSchema::delete_by_id(name.clone())
.exec(&self.sql_pool)
.await?;
Ok(())
}
}
impl SqlBackendHandler {
async fn get_user_attributes(&self) -> Result<Vec<AttributeSchema>> {
pub(crate) async fn get_schema_with_transaction(
transaction: &DatabaseTransaction,
) -> Result<Schema> {
Ok(Schema {
user_attributes: AttributeList {
attributes: Self::get_user_attributes(transaction).await?,
},
group_attributes: AttributeList {
attributes: Self::get_group_attributes(transaction).await?,
},
})
}
async fn get_user_attributes(
transaction: &DatabaseTransaction,
) -> Result<Vec<AttributeSchema>> {
Ok(model::UserAttributeSchema::find()
.order_by_asc(model::UserAttributeSchemaColumn::AttributeName)
.all(&self.sql_pool)
.all(transaction)
.await?
.into_iter()
.map(|m| m.into())
.collect())
}
async fn get_group_attributes(&self) -> Result<Vec<AttributeSchema>> {
async fn get_group_attributes(
transaction: &DatabaseTransaction,
) -> Result<Vec<AttributeSchema>> {
Ok(model::GroupAttributeSchema::find()
.order_by_asc(model::GroupAttributeSchemaColumn::AttributeName)
.all(&self.sql_pool)
.all(transaction)
.await?
.into_iter()
.map(|m| m.into())
@@ -62,7 +124,7 @@ mod tests {
user_attributes: AttributeList {
attributes: vec![
AttributeSchema {
name: "avatar".to_owned(),
name: "avatar".into(),
attribute_type: AttributeType::JpegPhoto,
is_list: false,
is_visible: true,
@@ -70,7 +132,7 @@ mod tests {
is_hardcoded: true,
},
AttributeSchema {
name: "first_name".to_owned(),
name: "first_name".into(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
@@ -78,7 +140,7 @@ mod tests {
is_hardcoded: true,
},
AttributeSchema {
name: "last_name".to_owned(),
name: "last_name".into(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
@@ -93,4 +155,96 @@ mod tests {
}
);
}
#[tokio::test]
async fn test_user_attribute_add_and_delete() {
let fixture = TestFixture::new().await;
let new_attribute = CreateAttributeRequest {
name: "new_attribute".into(),
attribute_type: AttributeType::Integer,
is_list: true,
is_visible: false,
is_editable: false,
};
fixture
.handler
.add_user_attribute(new_attribute)
.await
.unwrap();
let expected_value = AttributeSchema {
name: "new_attribute".into(),
attribute_type: AttributeType::Integer,
is_list: true,
is_visible: false,
is_editable: false,
is_hardcoded: false,
};
assert!(fixture
.handler
.get_schema()
.await
.unwrap()
.user_attributes
.attributes
.contains(&expected_value));
fixture
.handler
.delete_user_attribute(&"new_attribute".into())
.await
.unwrap();
assert!(!fixture
.handler
.get_schema()
.await
.unwrap()
.user_attributes
.attributes
.contains(&expected_value));
}
#[tokio::test]
async fn test_group_attribute_add_and_delete() {
let fixture = TestFixture::new().await;
let new_attribute = CreateAttributeRequest {
name: "NeW_aTTribute".into(),
attribute_type: AttributeType::JpegPhoto,
is_list: false,
is_visible: true,
is_editable: false,
};
fixture
.handler
.add_group_attribute(new_attribute)
.await
.unwrap();
let expected_value = AttributeSchema {
name: "new_attribute".into(),
attribute_type: AttributeType::JpegPhoto,
is_list: false,
is_visible: true,
is_editable: false,
is_hardcoded: false,
};
assert!(fixture
.handler
.get_schema()
.await
.unwrap()
.group_attributes
.attributes
.contains(&expected_value));
fixture
.handler
.delete_group_attribute(&"new_attriBUte".into())
.await
.unwrap();
assert!(!fixture
.handler
.get_schema()
.await
.unwrap()
.group_attributes
.attributes
.contains(&expected_value));
}
}
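The two tests above rely on attribute names comparing case-insensitively: an attribute created as `NeW_aTTribute` must be found under `new_attribute` and deletable as `new_attriBUte`. A minimal standalone sketch of that matching behavior, using a hypothetical `Name` stand-in with plain ASCII semantics (the real `AttributeName` type defined later wraps a `CaseInsensitiveString`):

```rust
// Hypothetical stand-in for the case-insensitive name matching the tests exercise.
#[derive(Debug, Clone)]
struct Name(String);

impl PartialEq for Name {
    fn eq(&self, other: &Self) -> bool {
        // Case-insensitive comparison: creation, lookup, and deletion all
        // agree regardless of the casing used.
        self.0.eq_ignore_ascii_case(&other.0)
    }
}
impl Eq for Name {}

fn main() {
    let created = Name("NeW_aTTribute".to_owned());
    assert_eq!(created, Name("new_attribute".to_owned()));
    assert_eq!(created, Name("new_attriBUte".to_owned()));
}
```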

View File

@@ -1,12 +1,47 @@
use crate::domain::sql_migrations::{
get_schema_version, migrate_from_version, upgrade_to_v1, Metadata,
};
use sea_orm::{
sea_query::Query, ConnectionTrait, DeriveValueType, Iden, QueryResult, TryGetable, Value,
};
use serde::{Deserialize, Serialize};
pub type DbConnection = sea_orm::DatabaseConnection;
#[derive(Copy, PartialEq, Eq, Debug, Clone, PartialOrd, Ord, DeriveValueType)]
pub struct SchemaVersion(pub i16);
pub const LAST_SCHEMA_VERSION: SchemaVersion = SchemaVersion(8);
#[derive(Copy, PartialEq, Eq, Debug, Clone, PartialOrd, Ord)]
pub struct PrivateKeyHash(pub [u8; 32]);
impl TryGetable for PrivateKeyHash {
fn try_get(res: &QueryResult, pre: &str, col: &str) -> Result<Self, sea_orm::TryGetError> {
let index = format!("{pre}{col}");
Self::try_get_by(res, index.as_str())
}
fn try_get_by_index(res: &QueryResult, index: usize) -> Result<Self, sea_orm::TryGetError> {
Self::try_get_by(res, index)
}
fn try_get_by<I: sea_orm::ColIdx>(
res: &QueryResult,
index: I,
) -> Result<Self, sea_orm::TryGetError> {
Ok(PrivateKeyHash(
std::convert::TryInto::<[u8; 32]>::try_into(res.try_get_by::<Vec<u8>, I>(index)?)
.unwrap(),
))
}
}
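Note that `try_get_by` above converts the stored blob into a fixed `[u8; 32]` and unwraps, so a row whose hash column holds the wrong number of bytes would panic. A hedged sketch of the same conversion done fallibly, in plain Rust without sea-orm (`hash_from_blob` is an illustrative name, not part of the codebase):

```rust
// Sketch: convert a database blob into a fixed-size hash, reporting a bad
// length as an error instead of panicking (the impl above unwraps).
fn hash_from_blob(blob: Vec<u8>) -> Result<[u8; 32], String> {
    let len = blob.len();
    // TryFrom<Vec<u8>> for [u8; N] fails (returning the Vec) on length mismatch.
    <[u8; 32]>::try_from(blob).map_err(|_| format!("expected 32 bytes, got {len}"))
}

fn main() {
    assert!(hash_from_blob(vec![0u8; 32]).is_ok());
    assert!(hash_from_blob(vec![0u8; 31]).is_err());
}
```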
impl From<PrivateKeyHash> for Value {
fn from(val: PrivateKeyHash) -> Self {
Self::from(val.0.to_vec())
}
}
pub async fn init_table(pool: &DbConnection) -> anyhow::Result<()> {
let version = {
@@ -21,6 +56,71 @@ pub async fn init_table(pool: &DbConnection) -> anyhow::Result<()> {
Ok(())
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq)]
pub enum ConfigLocation {
ConfigFile(String),
EnvironmentVariable(String),
CommandLine,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq)]
pub enum PrivateKeyLocation {
KeySeed(ConfigLocation),
KeyFile(ConfigLocation, std::ffi::OsString),
Default,
#[cfg(test)]
Tests,
}
#[derive(Debug)]
pub struct PrivateKeyInfo {
pub private_key_hash: PrivateKeyHash,
pub private_key_location: PrivateKeyLocation,
}
pub async fn get_private_key_info(pool: &DbConnection) -> anyhow::Result<Option<PrivateKeyInfo>> {
let result = pool
.query_one(
pool.get_database_backend().build(
Query::select()
.column(Metadata::PrivateKeyHash)
.column(Metadata::PrivateKeyLocation)
.from(Metadata::Table),
),
)
.await?;
let result = match result {
None => return Ok(None),
Some(r) => r,
};
if let Ok(hash) = result.try_get("", &Metadata::PrivateKeyHash.to_string()) {
Ok(Some(PrivateKeyInfo {
private_key_hash: hash,
private_key_location: serde_json::from_str(
&result.try_get::<String>("", &Metadata::PrivateKeyLocation.to_string())?,
)?,
}))
} else {
Ok(None)
}
}
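The `if let Ok(hash)` branch above maps a missing or unreadable column to `Ok(None)` rather than an error, so databases migrated before the private-key metadata existed are handled gracefully. A standalone sketch of that pattern on plain data (names are illustrative):

```rust
// Sketch of the "missing column means no stored info" pattern: a failed
// column read becomes Ok(None) instead of propagating as an error.
fn read_column(present: bool) -> Result<[u8; 32], &'static str> {
    if present { Ok([0u8; 32]) } else { Err("no such column") }
}

fn get_info(present: bool) -> Result<Option<[u8; 32]>, &'static str> {
    match read_column(present) {
        Ok(hash) => Ok(Some(hash)), // column present: return the stored hash
        Err(_) => Ok(None),         // column absent: not an error, just no info
    }
}

fn main() {
    assert_eq!(get_info(false), Ok(None));
    assert!(matches!(get_info(true), Ok(Some(_))));
}
```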
pub async fn set_private_key_info(pool: &DbConnection, info: PrivateKeyInfo) -> anyhow::Result<()> {
pool.execute(
pool.get_database_backend().build(
Query::update()
.table(Metadata::Table)
.value(Metadata::PrivateKeyHash, Value::from(info.private_key_hash))
.value(
Metadata::PrivateKeyLocation,
Value::from(serde_json::to_string(&info.private_key_location).unwrap()),
),
),
)
.await?;
Ok(())
}
#[cfg(test)]
mod tests {
use crate::domain::{
@@ -51,8 +151,8 @@ mod tests {
sql_pool
.execute(raw_statement(
r#"INSERT INTO users
(user_id, email, lowercase_email, display_name, creation_date, password_hash, uuid)
VALUES ("bôb", "böb@bob.bob", "böb@bob.bob", "Bob Bobbersön", "1970-01-01 00:00:00", "bob00", "abc")"#,
))
.await
.unwrap();
@@ -373,6 +473,83 @@ mod tests {
);
}
#[tokio::test]
async fn test_migration_to_v6() {
crate::infra::logging::init_for_tests();
let sql_pool = get_in_memory_db().await;
upgrade_to_v1(&sql_pool).await.unwrap();
migrate_from_version(&sql_pool, SchemaVersion(1), SchemaVersion(5))
.await
.unwrap();
sql_pool
.execute(raw_statement(
r#"INSERT INTO users (user_id, email, display_name, creation_date, uuid)
VALUES ("bob", "BOb@bob.com", "", "1970-01-01 00:00:00", "a02eaf13-48a7-30f6-a3d4-040ff7c52b04")"#,
))
.await
.unwrap();
sql_pool
.execute(raw_statement(
r#"INSERT INTO groups (display_name, creation_date, uuid)
VALUES ("BestGroup", "1970-01-01 00:00:00", "986765a5-3f03-389e-b47b-536b2d6e1bec")"#,
))
.await
.unwrap();
migrate_from_version(&sql_pool, SchemaVersion(5), SchemaVersion(6))
.await
.unwrap();
assert_eq!(
sql_migrations::JustSchemaVersion::find_by_statement(raw_statement(
r#"SELECT version FROM metadata"#
))
.one(&sql_pool)
.await
.unwrap()
.unwrap(),
sql_migrations::JustSchemaVersion {
version: SchemaVersion(6)
}
);
#[derive(FromQueryResult, PartialEq, Eq, Debug)]
struct ShortUserDetails {
email: String,
lowercase_email: String,
}
let result = ShortUserDetails::find_by_statement(raw_statement(
r#"SELECT email, lowercase_email FROM users WHERE user_id = "bob""#,
))
.one(&sql_pool)
.await
.unwrap()
.unwrap();
assert_eq!(
result,
ShortUserDetails {
email: "BOb@bob.com".to_owned(),
lowercase_email: "bob@bob.com".to_owned(),
}
);
#[derive(FromQueryResult, PartialEq, Eq, Debug)]
struct ShortGroupDetails {
display_name: String,
lowercase_display_name: String,
}
let result = ShortGroupDetails::find_by_statement(raw_statement(
r#"SELECT display_name, lowercase_display_name FROM groups"#,
))
.one(&sql_pool)
.await
.unwrap()
.unwrap();
assert_eq!(
result,
ShortGroupDetails {
display_name: "BestGroup".to_owned(),
lowercase_display_name: "bestgroup".to_owned(),
}
);
}
#[tokio::test]
async fn test_too_high_version() {
let sql_pool = get_in_memory_db().await;

View File

@@ -6,27 +6,30 @@ use crate::domain::{
},
model::{self, GroupColumn, UserColumn},
sql_backend_handler::SqlBackendHandler,
types::{AttributeValue, GroupDetails, GroupId, Serialized, User, UserAndGroups, UserId, Uuid},
types::{
AttributeName, AttributeValue, GroupDetails, GroupId, Serialized, User, UserAndGroups,
UserId, Uuid,
},
};
use async_trait::async_trait;
use sea_orm::{
sea_query::{
query::OnConflict, Alias, Cond, Expr, Func, IntoColumnRef, IntoCondition, SimpleExpr,
},
ActiveModelTrait, ActiveValue, ColumnTrait, EntityTrait, IntoActiveValue, ModelTrait,
QueryFilter, QueryOrder, QuerySelect, QueryTrait, Set, TransactionTrait,
ActiveModelTrait, ActiveValue, ColumnTrait, DatabaseTransaction, EntityTrait, IntoActiveValue,
ModelTrait, QueryFilter, QueryOrder, QuerySelect, QueryTrait, Set, TransactionTrait,
};
use std::collections::HashSet;
use tracing::instrument;
fn attribute_condition(name: AttributeName, value: Serialized) -> Cond {
Expr::in_subquery(
Expr::col(UserColumn::UserId.as_column_ref()),
model::UserAttributes::find()
.select_only()
.column(model::UserAttributesColumn::UserId)
.filter(model::UserAttributesColumn::AttributeName.eq(name))
.filter(model::UserAttributesColumn::Value.eq(value))
.into_query(),
)
.into_condition()
@@ -53,14 +56,17 @@ fn get_user_filter_expr(filter: UserRequestFilter) -> Cond {
Or(fs) => get_repeated_filter(fs, Cond::any(), false),
Not(f) => get_user_filter_expr(*f).not(),
UserId(user_id) => ColumnTrait::eq(&UserColumn::UserId, user_id).into_condition(),
Equality(column, value) => {
if column == UserColumn::UserId {
panic!("User id should be wrapped")
} else if column == UserColumn::Email {
ColumnTrait::eq(&UserColumn::LowercaseEmail, value.as_str().to_lowercase())
.into_condition()
} else {
ColumnTrait::eq(&column, value).into_condition()
}
}
AttributeEquality(column, value) => attribute_condition(column, value),
MemberOf(group) => Expr::col((group_table, GroupColumn::DisplayName))
.eq(group)
.into_condition(),
@@ -154,6 +160,103 @@ impl UserListerBackendHandler for SqlBackendHandler {
}
}
impl SqlBackendHandler {
async fn update_user_with_transaction(
transaction: &DatabaseTransaction,
request: UpdateUserRequest,
) -> Result<()> {
let lower_email = request.email.as_ref().map(|s| s.as_str().to_lowercase());
let update_user = model::users::ActiveModel {
user_id: ActiveValue::Set(request.user_id.clone()),
email: request.email.map(ActiveValue::Set).unwrap_or_default(),
lowercase_email: lower_email.map(ActiveValue::Set).unwrap_or_default(),
display_name: to_value(&request.display_name),
..Default::default()
};
let to_serialized_value = |s: &Option<String>| match s.as_ref().map(|s| s.as_str()) {
None => None,
Some("") => Some(ActiveValue::NotSet),
Some(s) => Some(ActiveValue::Set(Serialized::from(s))),
};
let mut update_user_attributes = Vec::new();
let mut remove_user_attributes = Vec::new();
let mut process_serialized =
|value: ActiveValue<Serialized>, attribute_name: AttributeName| match &value {
ActiveValue::NotSet => {
remove_user_attributes.push(attribute_name);
}
ActiveValue::Set(_) => {
update_user_attributes.push(model::user_attributes::ActiveModel {
user_id: Set(request.user_id.clone()),
attribute_name: Set(attribute_name),
value,
})
}
_ => unreachable!(),
};
if let Some(value) = to_serialized_value(&request.first_name) {
process_serialized(value, "first_name".into());
}
if let Some(value) = to_serialized_value(&request.last_name) {
process_serialized(value, "last_name".into());
}
if let Some(avatar) = request.avatar {
process_serialized(avatar.into_active_value(), "avatar".into());
}
let schema = Self::get_schema_with_transaction(transaction).await?;
for attribute in request.insert_attributes {
if schema
.user_attributes
.get_attribute_type(&attribute.name)
.is_some()
{
process_serialized(ActiveValue::Set(attribute.value), attribute.name.clone());
} else {
return Err(DomainError::InternalError(format!(
"User attribute name {} doesn't exist in the schema, yet was attempted to be inserted in the database",
&attribute.name
)));
}
}
for attribute in request.delete_attributes {
if schema
.user_attributes
.get_attribute_type(&attribute)
.is_some()
{
remove_user_attributes.push(attribute);
} else {
return Err(DomainError::InternalError(format!(
"User attribute name {} doesn't exist in the schema, yet was attempted to be removed from the database",
attribute
)));
}
}
update_user.update(transaction).await?;
if !remove_user_attributes.is_empty() {
model::UserAttributes::delete_many()
.filter(model::UserAttributesColumn::UserId.eq(&request.user_id))
.filter(model::UserAttributesColumn::AttributeName.is_in(remove_user_attributes))
.exec(transaction)
.await?;
}
if !update_user_attributes.is_empty() {
model::UserAttributes::insert_many(update_user_attributes)
.on_conflict(
OnConflict::columns([
model::UserAttributesColumn::UserId,
model::UserAttributesColumn::AttributeName,
])
.update_column(model::UserAttributesColumn::Value)
.to_owned(),
)
.exec(transaction)
.await?;
}
Ok(())
}
}
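`update_user_with_transaction` partitions each requested attribute change into an upsert or a delete: a `NotSet` value (produced from an empty string) means removal, a `Set` value means insert-or-update, and an absent field is left untouched. A standalone sketch of that partitioning on plain data, without sea-orm (function and variable names are illustrative):

```rust
// Sketch mirroring the process_serialized logic: None leaves the attribute
// alone, Some("") deletes it, any other value upserts it.
fn partition(changes: Vec<(&str, Option<&str>)>) -> (Vec<(String, String)>, Vec<String>) {
    let mut upserts = Vec::new();
    let mut deletes = Vec::new();
    for (name, value) in changes {
        match value {
            None => {}                                 // not mentioned: keep as-is
            Some("") => deletes.push(name.to_owned()), // empty: delete the attribute
            Some(v) => upserts.push((name.to_owned(), v.to_owned())), // upsert
        }
    }
    (upserts, deletes)
}

fn main() {
    let (up, del) = partition(vec![
        ("first_name", Some("Bob")),
        ("last_name", Some("")),
        ("avatar", None),
    ]);
    assert_eq!(up, vec![("first_name".to_owned(), "Bob".to_owned())]);
    assert_eq!(del, vec!["last_name".to_owned()]);
}
```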
#[async_trait]
impl UserBackendHandler for SqlBackendHandler {
#[instrument(skip_all, level = "debug", ret, fields(user_id = ?user_id.as_str()))]
@@ -192,9 +295,11 @@ impl UserBackendHandler for SqlBackendHandler {
async fn create_user(&self, request: CreateUserRequest) -> Result<()> {
let now = chrono::Utc::now().naive_utc();
let uuid = Uuid::from_name_and_date(request.user_id.as_str(), &now);
let lower_email = request.email.as_str().to_lowercase();
let new_user = model::users::ActiveModel {
user_id: Set(request.user_id.clone()),
email: Set(request.email),
lowercase_email: Set(lower_email),
display_name: to_value(&request.display_name),
creation_date: ActiveValue::Set(now),
uuid: ActiveValue::Set(uuid),
@@ -204,27 +309,47 @@ impl UserBackendHandler for SqlBackendHandler {
if let Some(first_name) = request.first_name {
new_user_attributes.push(model::user_attributes::ActiveModel {
user_id: Set(request.user_id.clone()),
attribute_name: Set("first_name".into()),
value: Set(Serialized::from(&first_name)),
});
}
if let Some(last_name) = request.last_name {
new_user_attributes.push(model::user_attributes::ActiveModel {
user_id: Set(request.user_id.clone()),
attribute_name: Set("last_name".into()),
value: Set(Serialized::from(&last_name)),
});
}
if let Some(avatar) = request.avatar {
new_user_attributes.push(model::user_attributes::ActiveModel {
user_id: Set(request.user_id.clone()),
attribute_name: Set("avatar".into()),
value: Set(Serialized::from(&avatar)),
});
}
self.sql_pool
.transaction::<_, (), DomainError>(|transaction| {
Box::pin(async move {
let schema = Self::get_schema_with_transaction(transaction).await?;
for attribute in request.attributes {
if schema
.user_attributes
.get_attribute_type(&attribute.name)
.is_some()
{
new_user_attributes.push(model::user_attributes::ActiveModel {
user_id: Set(request.user_id.clone()),
attribute_name: Set(attribute.name),
value: Set(attribute.value),
});
} else {
return Err(DomainError::InternalError(format!(
"Attribute name {} doesn't exist in the user schema,
yet was attempted to be inserted in the database",
&attribute.name
)));
}
}
new_user.insert(transaction).await?;
if !new_user_attributes.is_empty() {
model::UserAttributes::insert_many(new_user_attributes)
@@ -240,71 +365,11 @@ impl UserBackendHandler for SqlBackendHandler {
#[instrument(skip(self), level = "debug", err, fields(user_id = ?request.user_id.as_str()))]
async fn update_user(&self, request: UpdateUserRequest) -> Result<()> {
self.sql_pool
.transaction::<_, (), DomainError>(|transaction| {
Box::pin(
async move { Self::update_user_with_transaction(transaction, request).await },
)
})
.await?;
Ok(())
@@ -397,14 +462,38 @@ mod tests {
let users = get_user_names(
&fixture.handler,
Some(UserRequestFilter::AttributeEquality(
AttributeName::from("first_name"),
Serialized::from("first bob"),
)),
)
.await;
assert_eq!(users, vec!["bob"]);
}
#[tokio::test]
async fn test_list_users_email_filter_uppercase_email() {
let fixture = TestFixture::new().await;
insert_user_no_password(&fixture.handler, "UppEr").await;
let users_and_emails = fixture
.handler
.list_users(
Some(UserRequestFilter::Equality(
UserColumn::Email,
"uPPer@bob.bob".to_string(),
)),
false,
)
.await
.unwrap()
.into_iter()
.map(|u| (u.user.user_id.to_string(), u.user.email.to_string()))
.collect::<Vec<_>>();
assert_eq!(
users_and_emails,
vec![("upper".to_owned(), "UppEr@bob.bob".to_owned())]
);
}
#[tokio::test]
async fn test_list_users_substring_filter() {
let fixture = TestFixture::new().await;
@@ -448,7 +537,7 @@ mod tests {
let fixture = TestFixture::new().await;
let users = get_user_names(
&fixture.handler,
Some(UserRequestFilter::MemberOf("Best Group".into())),
)
.await;
assert_eq!(users, vec!["bob", "patrick"]);
@@ -460,7 +549,7 @@ mod tests {
let users = get_user_names(
&fixture.handler,
Some(UserRequestFilter::Or(vec![
UserRequestFilter::MemberOf("Best Group".into()),
UserRequestFilter::Equality(UserColumn::Uuid, "abc".to_string()),
])),
)
@@ -709,11 +798,13 @@ mod tests {
.handler
.update_user(UpdateUserRequest {
user_id: UserId::new("bob"),
email: Some("email".into()),
display_name: Some("display_name".to_string()),
first_name: Some("first_name".to_string()),
last_name: Some("last_name".to_string()),
avatar: Some(JpegPhoto::for_tests()),
delete_attributes: Vec::new(),
insert_attributes: Vec::new(),
})
.await
.unwrap();
@@ -723,21 +814,21 @@ mod tests {
.get_user_details(&UserId::new("bob"))
.await
.unwrap();
assert_eq!(user.email, "email".into());
assert_eq!(user.display_name.unwrap(), "display_name");
assert_eq!(
user.attributes,
vec![
AttributeValue {
name: "avatar".into(),
value: Serialized::from(&JpegPhoto::for_tests())
},
AttributeValue {
name: "first_name".into(),
value: Serialized::from("first_name")
},
AttributeValue {
name: "last_name".into(),
value: Serialized::from("last_name")
}
]
@@ -770,17 +861,129 @@ mod tests {
user.attributes,
vec![
AttributeValue {
name: "avatar".into(),
value: Serialized::from(&JpegPhoto::for_tests())
},
AttributeValue {
name: "first_name".into(),
value: Serialized::from("first bob")
}
]
);
}
#[tokio::test]
async fn test_update_user_insert_attribute() {
let fixture = TestFixture::new().await;
fixture
.handler
.update_user(UpdateUserRequest {
user_id: UserId::new("bob"),
first_name: None,
last_name: None,
avatar: None,
insert_attributes: vec![AttributeValue {
name: "first_name".into(),
value: Serialized::from("new first"),
}],
..Default::default()
})
.await
.unwrap();
let user = fixture
.handler
.get_user_details(&UserId::new("bob"))
.await
.unwrap();
assert_eq!(
user.attributes,
vec![
AttributeValue {
name: "first_name".into(),
value: Serialized::from("new first")
},
AttributeValue {
name: "last_name".into(),
value: Serialized::from("last bob")
}
]
);
}
#[tokio::test]
async fn test_update_user_delete_attribute() {
let fixture = TestFixture::new().await;
fixture
.handler
.update_user(UpdateUserRequest {
user_id: UserId::new("bob"),
first_name: None,
last_name: None,
avatar: None,
delete_attributes: vec!["first_name".into()],
..Default::default()
})
.await
.unwrap();
let user = fixture
.handler
.get_user_details(&UserId::new("bob"))
.await
.unwrap();
assert_eq!(
user.attributes,
vec![AttributeValue {
name: "last_name".into(),
value: Serialized::from("last bob")
}]
);
}
#[tokio::test]
async fn test_update_user_replace_attribute() {
let fixture = TestFixture::new().await;
fixture
.handler
.update_user(UpdateUserRequest {
user_id: UserId::new("bob"),
first_name: None,
last_name: None,
avatar: None,
delete_attributes: vec!["first_name".into()],
insert_attributes: vec![AttributeValue {
name: "first_name".into(),
value: Serialized::from("new first"),
}],
..Default::default()
})
.await
.unwrap();
let user = fixture
.handler
.get_user_details(&UserId::new("bob"))
.await
.unwrap();
assert_eq!(
user.attributes,
vec![
AttributeValue {
name: "first_name".into(),
value: Serialized::from("new first")
},
AttributeValue {
name: "last_name".into(),
value: Serialized::from("last bob")
},
]
);
}
#[tokio::test]
async fn test_update_user_delete_avatar() {
let fixture = TestFixture::new().await;
@@ -801,7 +1004,7 @@ mod tests {
.await
.unwrap();
let avatar = AttributeValue {
name: "avatar".into(),
value: Serialized::from(&JpegPhoto::for_tests()),
};
assert!(user.attributes.contains(&avatar));
@@ -831,11 +1034,15 @@ mod tests {
.handler
.create_user(CreateUserRequest {
user_id: UserId::new("james"),
email: "email".into(),
display_name: Some("display_name".to_string()),
first_name: None,
last_name: Some("last_name".to_string()),
avatar: Some(JpegPhoto::for_tests()),
attributes: vec![AttributeValue {
name: "first_name".into(),
value: Serialized::from("First Name"),
}],
})
.await
.unwrap();
@@ -845,21 +1052,21 @@ mod tests {
.get_user_details(&UserId::new("james"))
.await
.unwrap();
assert_eq!(user.email, "email".into());
assert_eq!(user.display_name.unwrap(), "display_name");
assert_eq!(
user.attributes,
vec![
AttributeValue {
name: "avatar".into(),
value: Serialized::from(&JpegPhoto::for_tests())
},
AttributeValue {
name: "first_name".into(),
value: Serialized::from("First Name")
},
AttributeValue {
name: "last_name".into(),
value: Serialized::from("last_name")
}
]

View File

@@ -1,5 +1,8 @@
use std::cmp::Ordering;
use base64::Engine;
use chrono::{NaiveDateTime, TimeZone};
use lldap_auth::types::CaseInsensitiveString;
use sea_orm::{
entity::IntoActiveValue,
sea_query::{value::ValueType, ArrayType, BlobSize, ColumnType, Nullable, ValueTypeErr},
@@ -8,7 +11,9 @@ use sea_orm::{
use serde::{Deserialize, Serialize};
use strum::{EnumString, IntoStaticStr};
use super::handler::AttributeSchema;
pub use super::model::UserColumn;
pub use lldap_auth::types::UserId;
#[derive(PartialEq, Hash, Eq, Clone, Debug, Default, Serialize, Deserialize, DeriveValueType)]
#[serde(try_from = "&str")]
@@ -120,51 +125,161 @@ impl Serialized {
}
}
fn compare_str_case_insensitive(s1: &str, s2: &str) -> Ordering {
let mut it_1 = s1.chars().flat_map(|c| c.to_lowercase());
let mut it_2 = s2.chars().flat_map(|c| c.to_lowercase());
loop {
match (it_1.next(), it_2.next()) {
(Some(c1), Some(c2)) => {
let o = c1.cmp(&c2);
if o != Ordering::Equal {
return o;
}
}
(None, Some(_)) => return Ordering::Less,
(Some(_), None) => return Ordering::Greater,
(None, None) => return Ordering::Equal,
}
}
}
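The comparator above walks both strings lazily, lowercasing one character at a time via `char::to_lowercase` (which can yield several characters), so no intermediate `String` is allocated; a shorter string that is a prefix of the other sorts first. A standalone copy with a few spot checks:

```rust
use std::cmp::Ordering;

// Standalone copy of the lazy case-insensitive comparison above.
fn compare_str_case_insensitive(s1: &str, s2: &str) -> Ordering {
    // to_lowercase() on a char returns an iterator, hence flat_map.
    let mut it_1 = s1.chars().flat_map(|c| c.to_lowercase());
    let mut it_2 = s2.chars().flat_map(|c| c.to_lowercase());
    loop {
        match (it_1.next(), it_2.next()) {
            (Some(c1), Some(c2)) => {
                let o = c1.cmp(&c2);
                if o != Ordering::Equal {
                    return o;
                }
            }
            (None, Some(_)) => return Ordering::Less,    // s1 is a prefix of s2
            (Some(_), None) => return Ordering::Greater, // s2 is a prefix of s1
            (None, None) => return Ordering::Equal,
        }
    }
}

fn main() {
    assert_eq!(compare_str_case_insensitive("Bob", "bOB"), Ordering::Equal);
    assert_eq!(compare_str_case_insensitive("abc", "abd"), Ordering::Less);
    assert_eq!(compare_str_case_insensitive("abc", "ab"), Ordering::Greater);
}
```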
macro_rules! make_case_insensitive_comparable_string {
($c:ident) => {
#[derive(Clone, Debug, Default, Serialize, Deserialize, DeriveValueType)]
pub struct $c(String);
impl PartialEq for $c {
fn eq(&self, other: &Self) -> bool {
compare_str_case_insensitive(&self.0, &other.0) == Ordering::Equal
}
}
impl Eq for $c {}
impl PartialOrd for $c {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
impl Ord for $c {
fn cmp(&self, other: &Self) -> Ordering {
compare_str_case_insensitive(&self.0, &other.0)
}
}
impl std::hash::Hash for $c {
fn hash<H: std::hash::Hasher>(&self, state: &mut H) {
self.0.to_lowercase().hash(state)
}
}
impl $c {
pub fn new(raw: &str) -> Self {
Self(raw.to_owned())
}
pub fn as_str(&self) -> &str {
self.0.as_str()
}
pub fn into_string(self) -> String {
self.0
}
}
impl From<String> for $c {
fn from(s: String) -> Self {
Self(s)
}
}
impl From<&str> for $c {
fn from(s: &str) -> Self {
Self::new(s)
}
}
impl std::fmt::Display for $c {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(f, "{}", self.0.as_str())
}
}
impl From<&$c> for Value {
fn from(user_id: &$c) -> Self {
user_id.as_str().into()
}
}
impl TryFromU64 for $c {
fn try_from_u64(_n: u64) -> Result<Self, DbErr> {
Err(DbErr::ConvertFromU64("$c cannot be constructed from u64"))
}
}
};
}
#[derive(
PartialEq,
Eq,
PartialOrd,
Ord,
Clone,
Debug,
Default,
Hash,
Serialize,
Deserialize,
DeriveValueType,
)]
#[serde(from = "CaseInsensitiveString")]
pub struct AttributeName(CaseInsensitiveString);
impl AttributeName {
pub fn new(s: &str) -> Self {
s.into()
}
pub fn as_str(&self) -> &str {
self.0.as_str()
}
pub fn into_string(self) -> String {
self.0.into_string()
}
}
impl<T> From<T> for AttributeName
where
T: Into<CaseInsensitiveString>,
{
fn from(s: T) -> Self {
Self(s.into())
}
}
impl std::fmt::Display for AttributeName {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(f, "{}", self.0.as_str())
}
}
impl From<&AttributeName> for Value {
fn from(attribute_name: &AttributeName) -> Self {
attribute_name.as_str().into()
}
}
impl TryFromU64 for AttributeName {
fn try_from_u64(_n: u64) -> Result<Self, DbErr> {
Err(DbErr::ConvertFromU64(
"AttributeName cannot be constructed from u64",
))
}
}
make_case_insensitive_comparable_string!(Email);
make_case_insensitive_comparable_string!(GroupName);
impl AsRef<GroupName> for GroupName {
fn as_ref(&self) -> &GroupName {
self
}
}
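Because `eq` in the macro compares case-insensitively, its `Hash` impl must hash the lowercased string; otherwise two equal values could hash differently and land in different hash-map buckets, breaking the `Eq`/`Hash` contract. A sketch demonstrating the contract with a hypothetical `CiName` stand-in (not the real macro output):

```rust
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

// Hypothetical stand-in: equal case-insensitively, hashed on the lowercased
// form so equal values always produce equal hashes.
#[derive(Debug, Clone)]
struct CiName(String);

impl PartialEq for CiName {
    fn eq(&self, other: &Self) -> bool {
        self.0.to_lowercase() == other.0.to_lowercase()
    }
}
impl Eq for CiName {}
impl Hash for CiName {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.0.to_lowercase().hash(state)
    }
}

fn main() {
    let mut set = HashSet::new();
    set.insert(CiName("Best Group".to_owned()));
    // A lookup with different casing finds the same entry.
    assert!(set.contains(&CiName("best group".to_owned())));
    assert_eq!(set.len(), 1);
}
```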
#[derive(PartialEq, Eq, Clone, Serialize, Deserialize, DeriveValueType)]
#[sea_orm(column_type = "Binary(BlobSize::Long)", array_type = "Bytes")]
@@ -205,13 +320,11 @@ impl TryFrom<Vec<u8>> for JpegPhoto {
}
}
impl TryFrom<&str> for JpegPhoto {
type Error = anyhow::Error;
fn try_from(string: &str) -> anyhow::Result<Self> {
// The String format is in base64.
<Self as TryFrom<_>>::try_from(base64::engine::general_purpose::STANDARD.decode(string)?)
}
}
@@ -285,14 +398,14 @@ impl IntoActiveValue<Serialized> for JpegPhoto {
#[derive(PartialEq, Eq, Debug, Clone, Serialize, Deserialize, Hash)]
pub struct AttributeValue {
pub name: AttributeName,
pub value: Serialized,
}
#[derive(PartialEq, Eq, Debug, Clone, Serialize, Deserialize)]
pub struct User {
pub user_id: UserId,
pub email: Email,
pub display_name: Option<String>,
pub creation_date: NaiveDateTime,
pub uuid: Uuid,
@@ -305,7 +418,7 @@ impl Default for User {
let epoch = chrono::Utc.timestamp_opt(0, 0).unwrap().naive_utc();
User {
user_id: UserId::default(),
email: Email::default(),
display_name: None,
creation_date: epoch,
uuid: Uuid::from_name_and_date("", &epoch),
@@ -342,7 +455,17 @@ impl From<&GroupId> for Value {
}
#[derive(
Debug,
Copy,
Clone,
PartialEq,
Eq,
Hash,
Serialize,
Deserialize,
EnumString,
IntoStaticStr,
juniper::GraphQLEnum,
)]
pub enum AttributeType {
String,
@@ -389,7 +512,7 @@ impl ValueType for AttributeType {
#[derive(PartialEq, Eq, Debug, Serialize, Deserialize)]
pub struct Group {
pub id: GroupId,
pub display_name: String,
pub display_name: GroupName,
pub creation_date: NaiveDateTime,
pub uuid: Uuid,
pub users: Vec<UserId>,
@@ -399,7 +522,7 @@ pub struct Group {
#[derive(Debug, Clone, PartialEq, Eq, Hash, Serialize, Deserialize)]
pub struct GroupDetails {
pub group_id: GroupId,
pub display_name: GroupName,
pub creation_date: NaiveDateTime,
pub uuid: Uuid,
pub attributes: Vec<AttributeValue>,
@@ -411,6 +534,38 @@ pub struct UserAndGroups {
pub groups: Option<Vec<GroupDetails>>,
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct AttributeValueAndSchema {
pub value: AttributeValue,
pub schema: AttributeSchema,
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct UserAndSchema {
pub user: User,
pub schema: Vec<AttributeSchema>,
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct GroupAndSchema {
pub group: Group,
pub schema: Vec<AttributeSchema>,
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct GroupDetailsAndSchema {
pub group: GroupDetails,
pub schema: Vec<AttributeSchema>,
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct UserAndGroupsAndSchema {
pub user: User,
pub user_schema: Vec<AttributeSchema>,
pub group: Option<Vec<GroupDetails>>,
pub group_schema: Vec<AttributeSchema>,
}
#[cfg(test)]
mod tests {
use super::*;

View File

@@ -6,12 +6,13 @@ use tracing::info;
use crate::domain::{
error::Result,
handler::{
AttributeSchema, BackendHandler, CreateAttributeRequest, CreateGroupRequest,
CreateUserRequest, GroupBackendHandler, GroupListerBackendHandler, GroupRequestFilter,
ReadSchemaBackendHandler, Schema, SchemaBackendHandler, UpdateGroupRequest,
UpdateUserRequest, UserBackendHandler, UserListerBackendHandler, UserRequestFilter,
},
schema::PublicSchema,
types::{AttributeName, Group, GroupDetails, GroupId, GroupName, User, UserAndGroups, UserId},
};
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
@@ -71,9 +72,10 @@ impl ValidationResults {
}
#[async_trait]
pub trait UserReadableBackendHandler: ReadSchemaBackendHandler {
async fn get_user_details(&self, user_id: &UserId) -> Result<User>;
async fn get_user_groups(&self, user_id: &UserId) -> Result<HashSet<GroupDetails>>;
async fn get_schema(&self) -> Result<PublicSchema>;
}
#[async_trait]
@@ -94,15 +96,22 @@ pub trait UserWriteableBackendHandler: UserReadableBackendHandler {
#[async_trait]
pub trait AdminBackendHandler:
UserWriteableBackendHandler
+ ReadonlyBackendHandler
+ UserWriteableBackendHandler
+ SchemaBackendHandler
{
async fn create_user(&self, request: CreateUserRequest) -> Result<()>;
async fn delete_user(&self, user_id: &UserId) -> Result<()>;
async fn add_user_to_group(&self, user_id: &UserId, group_id: GroupId) -> Result<()>;
async fn remove_user_from_group(&self, user_id: &UserId, group_id: GroupId) -> Result<()>;
async fn update_group(&self, request: UpdateGroupRequest) -> Result<()>;
async fn create_group(&self, group_name: &str) -> Result<GroupId>;
async fn create_group(&self, request: CreateGroupRequest) -> Result<GroupId>;
async fn delete_group(&self, group_id: GroupId) -> Result<()>;
async fn add_user_attribute(&self, request: CreateAttributeRequest) -> Result<()>;
async fn add_group_attribute(&self, request: CreateAttributeRequest) -> Result<()>;
async fn delete_user_attribute(&self, name: &AttributeName) -> Result<()>;
async fn delete_group_attribute(&self, name: &AttributeName) -> Result<()>;
}
#[async_trait]
@@ -113,6 +122,11 @@ impl<Handler: BackendHandler> UserReadableBackendHandler for Handler {
async fn get_user_groups(&self, user_id: &UserId) -> Result<HashSet<GroupDetails>> {
<Handler as UserBackendHandler>::get_user_groups(self, user_id).await
}
async fn get_schema(&self) -> Result<PublicSchema> {
Ok(PublicSchema::from(
<Handler as ReadSchemaBackendHandler>::get_schema(self).await?,
))
}
}
#[async_trait]
@@ -155,12 +169,24 @@ impl<Handler: BackendHandler> AdminBackendHandler for Handler {
async fn update_group(&self, request: UpdateGroupRequest) -> Result<()> {
<Handler as GroupBackendHandler>::update_group(self, request).await
}
async fn create_group(&self, group_name: &str) -> Result<GroupId> {
<Handler as GroupBackendHandler>::create_group(self, group_name).await
async fn create_group(&self, request: CreateGroupRequest) -> Result<GroupId> {
<Handler as GroupBackendHandler>::create_group(self, request).await
}
async fn delete_group(&self, group_id: GroupId) -> Result<()> {
<Handler as GroupBackendHandler>::delete_group(self, group_id).await
}
async fn add_user_attribute(&self, request: CreateAttributeRequest) -> Result<()> {
<Handler as SchemaBackendHandler>::add_user_attribute(self, request).await
}
async fn add_group_attribute(&self, request: CreateAttributeRequest) -> Result<()> {
<Handler as SchemaBackendHandler>::add_group_attribute(self, request).await
}
async fn delete_user_attribute(&self, name: &AttributeName) -> Result<()> {
<Handler as SchemaBackendHandler>::delete_user_attribute(self, name).await
}
async fn delete_group_attribute(&self, name: &AttributeName) -> Result<()> {
<Handler as SchemaBackendHandler>::delete_group_attribute(self, name).await
}
}
pub struct AccessControlledBackendHandler<Handler> {
@@ -238,19 +264,23 @@ impl<Handler: BackendHandler> AccessControlledBackendHandler<Handler> {
Ok(self.get_permissions_from_groups(user_id, user_groups.iter().map(|g| &g.display_name)))
}
pub fn get_permissions_from_groups<'a, Groups: Iterator<Item = &'a String> + Clone + 'a>(
pub fn get_permissions_from_groups<Groups, T>(
&self,
user_id: UserId,
groups: Groups,
) -> ValidationResults {
let is_in_group = |name| groups.clone().any(|g| g == name);
) -> ValidationResults
where
Groups: Iterator<Item = T> + Clone,
T: AsRef<GroupName>,
{
let is_in_group = |name: GroupName| groups.clone().any(|g| *g.as_ref() == name);
ValidationResults {
user: user_id,
permission: if is_in_group("lldap_admin") {
permission: if is_in_group("lldap_admin".into()) {
Permission::Admin
} else if is_in_group("lldap_password_manager") {
} else if is_in_group("lldap_password_manager".into()) {
Permission::PasswordManager
} else if is_in_group("lldap_strict_readonly") {
} else if is_in_group("lldap_strict_readonly".into()) {
Permission::Readonly
} else {
Permission::Regular
@@ -265,7 +295,7 @@ pub struct UserRestrictedListerBackendHandler<'a, Handler> {
}
#[async_trait]
impl<'a, Handler: SchemaBackendHandler + Sync> SchemaBackendHandler
impl<'a, Handler: ReadSchemaBackendHandler + Sync> ReadSchemaBackendHandler
for UserRestrictedListerBackendHandler<'a, Handler>
{
async fn get_schema(&self) -> Result<Schema> {


@@ -1,8 +1,3 @@
use std::collections::{hash_map::DefaultHasher, HashSet};
use std::hash::{Hash, Hasher};
use std::pin::Pin;
use std::task::{Context, Poll};
use actix_web::{
cookie::{Cookie, SameSite},
dev::{Service, ServiceRequest, ServiceResponse, Transform},
@@ -17,6 +12,12 @@ use futures_util::FutureExt;
use hmac::Hmac;
use jwt::{SignWithKey, VerifyWithKey};
use sha2::Sha512;
use std::{
collections::HashSet,
hash::Hash,
pin::Pin,
task::{Context, Poll},
};
use time::ext::NumericalDuration;
use tracing::{debug, info, instrument, warn};
@@ -27,7 +28,7 @@ use crate::{
error::DomainError,
handler::{BackendHandler, BindRequest, LoginHandler, UserRequestFilter},
opaque_handler::OpaqueHandler,
types::{GroupDetails, UserColumn, UserId},
types::{GroupDetails, GroupName, UserColumn, UserId},
},
infra::{
access_control::{ReadonlyBackendHandler, UserReadableBackendHandler, ValidationResults},
@@ -39,31 +40,46 @@ use crate::{
type Token<S> = jwt::Token<jwt::Header, JWTClaims, S>;
type SignedToken = Token<jwt::token::Signed>;
fn create_jwt(key: &Hmac<Sha512>, user: String, groups: HashSet<GroupDetails>) -> SignedToken {
fn default_hash<T: Hash + ?Sized>(token: &T) -> u64 {
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;
let mut s = DefaultHasher::new();
token.hash(&mut s);
s.finish()
}
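The `default_hash` helper above can be exercised on its own. A minimal standalone sketch (same shape as the helper, with a hypothetical `main` added): note that `DefaultHasher`'s algorithm is unspecified by the standard library, so the digest is only guaranteed stable within one build, which is fine for an in-memory JWT blacklist but not for persisted identifiers.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash any `Hash` value with the standard-library hasher, returning the
// 64-bit digest, as in the diff above.
fn default_hash<T: Hash + ?Sized>(token: &T) -> u64 {
    let mut s = DefaultHasher::new();
    token.hash(&mut s);
    s.finish()
}

fn main() {
    let a = default_hash("refresh-token-abc");
    let b = default_hash("refresh-token-abc");
    let c = default_hash("refresh-token-xyz");
    // Deterministic within one process; distinct inputs differ in practice.
    assert_eq!(a, b);
    assert_ne!(a, c);
    println!("ok");
}
```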
async fn create_jwt<Handler: TcpBackendHandler>(
handler: &Handler,
key: &Hmac<Sha512>,
user: &UserId,
groups: HashSet<GroupDetails>,
) -> SignedToken {
let claims = JWTClaims {
exp: Utc::now() + chrono::Duration::days(1),
iat: Utc::now(),
user,
groups: groups.into_iter().map(|g| g.display_name).collect(),
user: user.to_string(),
groups: groups
.into_iter()
.map(|g| g.display_name.into_string())
.collect(),
};
let expiry = claims.exp.naive_utc();
let header = jwt::Header {
algorithm: jwt::AlgorithmType::Hs512,
..Default::default()
};
jwt::Token::new(header, claims).sign_with_key(key).unwrap()
let token = jwt::Token::new(header, claims).sign_with_key(key).unwrap();
handler
.register_jwt(user, default_hash(token.as_str()), expiry)
.await
.unwrap();
token
}
fn parse_refresh_token(token: &str) -> TcpResult<(u64, UserId)> {
match token.split_once('+') {
None => Err(DomainError::AuthenticationError("Invalid refresh token".to_string()).into()),
Some((token, u)) => {
let refresh_token_hash = {
let mut s = DefaultHasher::new();
token.hash(&mut s);
s.finish()
};
Ok((refresh_token_hash, UserId::new(u)))
}
Some((token, u)) => Ok((default_hash(token), UserId::new(u))),
}
}
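The cookie parsed by `parse_refresh_token` is the concatenation `"<opaque token>+<user id>"` built further down in the diff. A small sketch of the split step (hypothetical function name; the real code also hashes the token half): `split_once` cuts at the *first* `'+'`, so the opaque token itself must never contain one.

```rust
// Split a refresh cookie of the form "<token>+<user>" at the first '+'.
fn split_refresh_cookie(raw: &str) -> Option<(&str, &str)> {
    raw.split_once('+')
}

fn main() {
    assert_eq!(split_refresh_cookie("abc123+alice"), Some(("abc123", "alice")));
    // No separator means an invalid refresh token.
    assert_eq!(split_refresh_cookie("no-separator"), None);
    println!("ok");
}
```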
@@ -99,26 +115,25 @@ where
"Invalid refresh token".to_string(),
)));
}
Ok(data
.get_readonly_handler()
.get_user_groups(&user)
.await
.map(|groups| create_jwt(jwt_key, user.to_string(), groups))
.map(|token| {
HttpResponse::Ok()
.cookie(
Cookie::build("token", token.as_str())
.max_age(1.days())
.path("/")
.http_only(true)
.same_site(SameSite::Strict)
.finish(),
)
.json(&login::ServerLoginResponse {
token: token.as_str().to_owned(),
refresh_token: None,
})
})?)
let mut path = data.server_url.path().to_string();
if !path.ends_with('/') {
path.push('/');
};
let groups = data.get_readonly_handler().get_user_groups(&user).await?;
let token = create_jwt(data.get_tcp_handler(), jwt_key, &user, groups).await;
Ok(HttpResponse::Ok()
.cookie(
Cookie::build("token", token.as_str())
.max_age(1.days())
.path(&path)
.http_only(true)
.same_site(SameSite::Strict)
.finish(),
)
.json(&login::ServerLoginResponse {
token: token.as_str().to_owned(),
refresh_token: None,
}))
}
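The trailing-slash normalization repeated in the handlers above exists so that cookie paths like `format!("{}auth", path)` come out well-formed when LLDAP is served under a URL prefix. A standalone sketch of that step (hypothetical helper name):

```rust
// Ensure the server base path ends with '/' so "{path}auth" is a valid path.
fn normalize_base_path(mut path: String) -> String {
    if !path.ends_with('/') {
        path.push('/');
    }
    path
}

fn main() {
    assert_eq!(normalize_base_path("/lldap".to_string()), "/lldap/");
    // Already-normalized input is left alone.
    assert_eq!(normalize_base_path("/".to_string()), "/");
    assert_eq!(
        format!("{}auth", normalize_base_path("/lldap".to_string())),
        "/lldap/auth"
    );
    println!("ok");
}
```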
async fn get_refresh_handler<Backend>(
@@ -175,7 +190,7 @@ where
user.display_name
.as_deref()
.unwrap_or_else(|| user.user_id.as_str()),
&user.email,
user.email.as_str(),
&token,
&data.server_url,
&data.mail_options,
@@ -230,13 +245,17 @@ where
.delete_password_reset_token(token)
.await;
let groups = HashSet::new();
let token = create_jwt(&data.jwt_key, user_id.to_string(), groups);
let token = create_jwt(data.get_tcp_handler(), &data.jwt_key, &user_id, groups).await;
let mut path = data.server_url.path().to_string();
if !path.ends_with('/') {
path.push('/');
};
Ok(HttpResponse::Ok()
.cookie(
Cookie::build("token", token.as_str())
.max_age(5.minutes())
// Cookie is only valid to reset the password.
.path("/auth")
.path(format!("{}auth", path))
.http_only(true)
.same_site(SameSite::Strict)
.finish(),
@@ -271,16 +290,20 @@ where
data.get_tcp_handler()
.delete_refresh_token(refresh_token_hash)
.await?;
let new_blacklisted_jwts = data.get_tcp_handler().blacklist_jwts(&user).await?;
let new_blacklisted_jwt_hashes = data.get_tcp_handler().blacklist_jwts(&user).await?;
let mut jwt_blacklist = data.jwt_blacklist.write().unwrap();
for jwt in new_blacklisted_jwts {
jwt_blacklist.insert(jwt);
for jwt_hash in new_blacklisted_jwt_hashes {
jwt_blacklist.insert(jwt_hash);
}
let mut path = data.server_url.path().to_string();
if !path.ends_with('/') {
path.push('/');
};
Ok(HttpResponse::Ok()
.cookie(
Cookie::build("token", "")
.max_age(0.days())
.path("/")
.path(&path)
.http_only(true)
.same_site(SameSite::Strict)
.finish(),
@@ -288,7 +311,7 @@ where
.cookie(
Cookie::build("refresh_token", "")
.max_age(0.days())
.path("/auth")
.path(format!("{}auth", path))
.http_only(true)
.same_site(SameSite::Strict)
.finish(),
@@ -341,14 +364,17 @@ where
// token.
let groups = data.get_readonly_handler().get_user_groups(name).await?;
let (refresh_token, max_age) = data.get_tcp_handler().create_refresh_token(name).await?;
let token = create_jwt(&data.jwt_key, name.to_string(), groups);
let token = create_jwt(data.get_tcp_handler(), &data.jwt_key, name, groups).await;
let refresh_token_plus_name = refresh_token + "+" + name.as_str();
let mut path = data.server_url.path().to_string();
if !path.ends_with('/') {
path.push('/');
};
Ok(HttpResponse::Ok()
.cookie(
Cookie::build("token", token.as_str())
.max_age(1.days())
.path("/")
.path(&path)
.http_only(true)
.same_site(SameSite::Strict)
.finish(),
@@ -356,7 +382,7 @@ where
.cookie(
Cookie::build("refresh_token", refresh_token_plus_name.clone())
.max_age(max_age.num_days().days())
.path("/auth")
.path(format!("{}auth", path))
.http_only(true)
.same_site(SameSite::Strict)
.finish(),
@@ -402,13 +428,13 @@ async fn simple_login<Backend>(
where
Backend: TcpBackendHandler + BackendHandler + OpaqueHandler + LoginHandler + 'static,
{
let user_id = UserId::new(&request.username);
let login::ClientSimpleLoginRequest { username, password } = request.into_inner();
let bind_request = BindRequest {
name: user_id.clone(),
password: request.password.clone(),
name: username.clone(),
password,
};
data.get_login_handler().bind(bind_request).await?;
get_login_successful_response(&data, &user_id).await
get_login_successful_response(&data, &username).await
}
async fn simple_login_handler<Backend>(
@@ -474,14 +500,14 @@ where
.await
.map_err(|e| TcpError::BadRequest(format!("{:#?}", e)))?
.into_inner();
let user_id = UserId::new(&registration_start_request.username);
let user_id = &registration_start_request.username;
let user_is_admin = data
.get_readonly_handler()
.get_user_groups(&user_id)
.get_user_groups(user_id)
.await?
.iter()
.any(|g| g.display_name == "lldap_admin");
if !validation_result.can_change_password(&user_id, user_is_admin) {
.any(|g| g.display_name == "lldap_admin".into());
if !validation_result.can_change_password(user_id, user_is_admin) {
return Err(TcpError::UnauthorizedError(
"Not authorized to change the user's password".to_string(),
));
@@ -604,17 +630,17 @@ pub(crate) fn check_if_token_is_valid<Backend: BackendHandler>(
token.header().algorithm
)));
}
let jwt_hash = {
let mut s = DefaultHasher::new();
token_str.hash(&mut s);
s.finish()
};
let jwt_hash = default_hash(token_str);
if state.jwt_blacklist.read().unwrap().contains(&jwt_hash) {
return Err(ErrorUnauthorized("JWT was logged out"));
}
Ok(state.backend_handler.get_permissions_from_groups(
UserId::new(&token.claims().user),
token.claims().groups.iter(),
token
.claims()
.groups
.iter()
.map(|s| GroupName::from(s.as_str())),
))
}


@@ -89,6 +89,14 @@ pub struct RunOpts {
#[clap(short, long, env = "LLDAP_DATABASE_URL")]
pub database_url: Option<String>,
/// Force admin password reset to the config value.
#[clap(long, env = "LLDAP_FORCE_LDAP_USER_PASS_RESET")]
pub force_ldap_user_pass_reset: Option<bool>,
/// Force update of the private key after a key change.
#[clap(long, env = "LLDAP_FORCE_UPDATE_PRIVATE_KEY")]
pub force_update_private_key: Option<bool>,
#[clap(flatten)]
pub smtp_opts: SmtpOpts,


@@ -1,12 +1,16 @@
use crate::{
domain::types::UserId,
domain::{
sql_tables::{ConfigLocation, PrivateKeyHash, PrivateKeyInfo, PrivateKeyLocation},
types::{AttributeName, UserId},
},
infra::cli::{GeneralConfigOpts, LdapsOpts, RunOpts, SmtpEncryption, SmtpOpts, TestEmailOpts},
};
use anyhow::{Context, Result};
use anyhow::{bail, Context, Result};
use figment::{
providers::{Env, Format, Serialized, Toml},
Figment,
};
use figment_file_provider_adapter::FileAdapter;
use lettre::message::Mailbox;
use lldap_auth::opaque::{server::ServerSetup, KeyPair};
use secstr::SecUtf8;
@@ -83,12 +87,16 @@ pub struct Configuration {
pub ldap_user_email: String,
#[builder(default = r#"SecUtf8::from("password")"#)]
pub ldap_user_pass: SecUtf8,
#[builder(default = "false")]
pub force_ldap_user_pass_reset: bool,
#[builder(default = "false")]
pub force_update_private_key: bool,
#[builder(default = r#"String::from("sqlite://users.db?mode=rwc")"#)]
pub database_url: String,
#[builder(default)]
pub ignored_user_attributes: Vec<String>,
pub ignored_user_attributes: Vec<AttributeName>,
#[builder(default)]
pub ignored_group_attributes: Vec<String>,
pub ignored_group_attributes: Vec<AttributeName>,
#[builder(default = "false")]
pub verbose: bool,
#[builder(default = r#"String::from("server_key")"#)]
@@ -105,7 +113,7 @@ pub struct Configuration {
pub http_url: Url,
#[serde(skip)]
#[builder(field(private), default = "None")]
server_setup: Option<ServerSetup>,
server_setup: Option<ServerSetupConfig>,
}
impl std::default::Default for Configuration {
@@ -123,6 +131,7 @@ impl ConfigurationBuilder {
.and_then(|o| o.as_ref())
.map(SecUtf8::unsecure)
.unwrap_or_default(),
PrivateKeyLocation::Default,
)?;
Ok(self.server_setup(Some(server_setup)).private_build()?)
}
@@ -131,20 +140,85 @@ impl ConfigurationBuilder {
pub fn for_tests() -> Configuration {
ConfigurationBuilder::default()
.verbose(true)
.server_setup(Some(generate_random_private_key()))
.server_setup(Some(ServerSetupConfig {
server_setup: generate_random_private_key(),
private_key_location: PrivateKeyLocation::Tests,
}))
.private_build()
.unwrap()
}
}
fn stable_hash(val: &[u8]) -> [u8; 32] {
use sha2::{Digest, Sha256};
let mut hasher = Sha256::new();
hasher.update(val);
hasher.finalize().into()
}
impl Configuration {
pub fn get_server_setup(&self) -> &ServerSetup {
self.server_setup.as_ref().unwrap()
&self.server_setup.as_ref().unwrap().server_setup
}
pub fn get_server_keys(&self) -> &KeyPair {
self.get_server_setup().keypair()
}
pub fn get_private_key_info(&self) -> PrivateKeyInfo {
PrivateKeyInfo {
private_key_hash: PrivateKeyHash(stable_hash(self.get_server_keys().private())),
private_key_location: self
.server_setup
.as_ref()
.unwrap()
.private_key_location
.clone(),
}
}
}
/// Returns whether the private key is entirely new.
pub fn compare_private_key_hashes(
previous_info: Option<&PrivateKeyInfo>,
private_key_info: &PrivateKeyInfo,
) -> Result<bool> {
match previous_info {
None => Ok(true),
Some(previous_info) => {
if previous_info.private_key_hash == private_key_info.private_key_hash {
Ok(false)
} else {
match (
&previous_info.private_key_location,
&private_key_info.private_key_location,
) {
(
PrivateKeyLocation::KeyFile(old_location, file_path),
PrivateKeyLocation::KeySeed(new_location),
) => {
bail!("The private key is configured to be generated from a seed (from {new_location:?}), but it used to come from the file {file_path:?} (defined in {old_location:?}). Did you just upgrade from <=v0.4 to >=v0.5? The key seed was not supported, revert to just using the file.");
}
(PrivateKeyLocation::Default, PrivateKeyLocation::KeySeed(new_location)) => {
bail!("The private key is configured to be generated from a seed (from {new_location:?}), but it used to come from default key file \"server_key\". Did you just upgrade from <=v0.4 to >=v0.5? The key seed was not yet supported, revert to just using the file.");
}
(
PrivateKeyLocation::KeyFile(old_location, old_path),
PrivateKeyLocation::KeyFile(new_location, new_path),
) => {
if old_path == new_path {
bail!("The contents of the private key file from {old_path:?} have changed. This usually means that the file was deleted and re-created. If using docker, make sure that the folder is made persistent (by mounting a volume or a directory). If you have several instances of LLDAP, make sure they share the same file (or switch to a key seed).");
} else {
bail!("The private key file used to be {old_path:?} (defined in {old_location:?}), but now is {new_path:?} (defined in {new_location:?}). Make sure to copy the old file to the new location.");
}
}
(old_location, new_location) => {
bail!("The private key has changed. It used to come from {old_location:?}, but now it comes from {new_location:?}.");
}
}
}
}
}
}
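The error matrix in `compare_private_key_hashes` can be summarized with a simplified sketch using hypothetical stand-in types (the real `PrivateKeyLocation` carries config-source metadata): equal hashes mean the key is unchanged; differing hashes produce an error tailored to where the old and new keys came from.

```rust
// Hypothetical, simplified stand-in for PrivateKeyLocation.
#[derive(Debug)]
enum Location {
    Default,
    KeyFile(&'static str),
    KeySeed,
}

// Mirror of the (old, new) decision matrix above, reduced to messages.
fn describe_change(old: &Location, new: &Location) -> String {
    match (old, new) {
        (Location::KeyFile(path), Location::KeySeed) => {
            format!("key used to come from file {path:?}, now from a seed")
        }
        (Location::Default, Location::KeySeed) => {
            "key used to come from the default key file, now from a seed".to_string()
        }
        (Location::KeyFile(a), Location::KeyFile(b)) if a == b => {
            format!("contents of key file {a:?} changed")
        }
        (Location::KeyFile(a), Location::KeyFile(b)) => {
            format!("key file moved from {a:?} to {b:?}")
        }
        (old, new) => format!("key source changed from {old:?} to {new:?}"),
    }
}

fn main() {
    assert!(describe_change(&Location::Default, &Location::KeySeed).contains("default"));
    assert!(describe_change(&Location::KeyFile("a"), &Location::KeyFile("a")).contains("changed"));
    assert!(describe_change(&Location::KeyFile("a"), &Location::KeyFile("b")).contains("moved"));
    println!("ok");
}
```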
fn generate_random_private_key() -> ServerSetup {
@@ -166,34 +240,129 @@ fn write_to_readonly_file(path: &std::path::Path, buffer: &[u8]) -> Result<()> {
Ok(file.write_all(buffer)?)
}
fn get_server_setup(file_path: &str, key_seed: &str) -> Result<ServerSetup> {
#[derive(Debug, Clone)]
pub struct ServerSetupConfig {
server_setup: ServerSetup,
private_key_location: PrivateKeyLocation,
}
#[derive(derive_more::From)]
enum PrivateKeyLocationOrFigment {
Figment(Figment),
PrivateKeyLocation(PrivateKeyLocation),
}
impl PrivateKeyLocationOrFigment {
fn for_key_seed(&self) -> PrivateKeyLocation {
match self {
PrivateKeyLocationOrFigment::Figment(config) => {
match config.find_metadata("key_seed") {
Some(figment::Metadata {
source: Some(figment::Source::File(path)),
..
}) => PrivateKeyLocation::KeySeed(ConfigLocation::ConfigFile(
path.to_string_lossy().to_string(),
)),
Some(figment::Metadata {
source: None, name, ..
}) => PrivateKeyLocation::KeySeed(ConfigLocation::EnvironmentVariable(
name.clone().to_string(),
)),
None
| Some(figment::Metadata {
source: Some(figment::Source::Code(_)),
..
}) => PrivateKeyLocation::Default,
other => panic!("Unexpected config location: {:?}", other),
}
}
PrivateKeyLocationOrFigment::PrivateKeyLocation(PrivateKeyLocation::KeyFile(
config_location,
_,
)) => {
panic!("Unexpected location: {:?}", config_location)
}
PrivateKeyLocationOrFigment::PrivateKeyLocation(location) => location.clone(),
}
}
fn for_key_file(&self, server_key_file: &str) -> PrivateKeyLocation {
match self {
PrivateKeyLocationOrFigment::Figment(config) => {
match config.find_metadata("key_file") {
Some(figment::Metadata {
source: Some(figment::Source::File(path)),
..
}) => PrivateKeyLocation::KeyFile(
ConfigLocation::ConfigFile(path.to_string_lossy().to_string()),
server_key_file.into(),
),
Some(figment::Metadata {
source: None, name, ..
}) => PrivateKeyLocation::KeyFile(
ConfigLocation::EnvironmentVariable(name.to_string()),
server_key_file.into(),
),
None
| Some(figment::Metadata {
source: Some(figment::Source::Code(_)),
..
}) => PrivateKeyLocation::Default,
other => panic!("Unexpected config location: {:?}", other),
}
}
PrivateKeyLocationOrFigment::PrivateKeyLocation(PrivateKeyLocation::KeySeed(file)) => {
panic!("Unexpected location: {:?}", file)
}
PrivateKeyLocationOrFigment::PrivateKeyLocation(location) => location.clone(),
}
}
}
fn get_server_setup<L: Into<PrivateKeyLocationOrFigment>>(
file_path: &str,
key_seed: &str,
private_key_location: L,
) -> Result<ServerSetupConfig> {
let private_key_location = private_key_location.into();
use std::fs::read;
let path = std::path::Path::new(file_path);
if !key_seed.is_empty() {
if file_path != "server_key" || path.exists() {
if path.exists() {
bail!(
"A key_seed was given, but a key file already exists at `{}`. Which one to use is ambiguous, aborting.\nNote: If you just migrated from <=v0.4 to >=v0.5, the previous version did not support key_seed, so it was falling back onto a key file. Remove the seed from the configuration.",
file_path
);
} else if file_path == "server_key" {
eprintln!("WARNING: A key_seed was given, we will ignore the server_key and generate one from the seed!");
} else {
println!("Got a key_seed, ignoring key_file");
println!("Generating the key from the key_seed");
}
let hash = |val: &[u8]| -> [u8; 32] {
use sha2::{Digest, Sha256};
let mut seed_hasher = Sha256::new();
seed_hasher.update(val);
seed_hasher.finalize().into()
};
use rand::SeedableRng;
let mut rng = rand_chacha::ChaCha20Rng::from_seed(hash(key_seed.as_bytes()));
Ok(ServerSetup::new(&mut rng))
let mut rng = rand_chacha::ChaCha20Rng::from_seed(stable_hash(key_seed.as_bytes()));
Ok(ServerSetupConfig {
server_setup: ServerSetup::new(&mut rng),
private_key_location: private_key_location.for_key_seed(),
})
} else if path.exists() {
let bytes = read(file_path).context(format!("Could not read key file `{}`", file_path))?;
Ok(ServerSetup::deserialize(&bytes)?)
Ok(ServerSetupConfig {
server_setup: ServerSetup::deserialize(&bytes).context(format!(
"while parsing the contents of the `{}` file",
file_path
))?,
private_key_location: private_key_location.for_key_file(file_path),
})
} else {
let server_setup = generate_random_private_key();
write_to_readonly_file(path, &server_setup.serialize()).context(format!(
"Could not write the generated server setup to file `{}`",
file_path,
))?;
Ok(server_setup)
Ok(ServerSetupConfig {
server_setup,
private_key_location: private_key_location.for_key_file(file_path),
})
}
}
@@ -244,6 +413,14 @@ impl ConfigOverrider for RunOpts {
if let Some(database_url) = self.database_url.as_ref() {
config.database_url = database_url.to_string();
}
if let Some(force_ldap_user_pass_reset) = self.force_ldap_user_pass_reset {
config.force_ldap_user_pass_reset = force_ldap_user_pass_reset;
}
if let Some(force_update_private_key) = self.force_update_private_key {
config.force_update_private_key = force_update_private_key;
}
self.smtp_opts.override_config(config);
self.ldaps_opts.override_config(config);
}
@@ -317,21 +494,20 @@ pub fn init<C>(overrides: C) -> Result<Configuration>
where
C: TopLevelCommandOpts + ConfigOverrider,
{
let config_file = overrides.general_config().config_file.clone();
println!(
"Loading configuration from {}",
overrides.general_config().config_file
&overrides.general_config().config_file
);
use figment_file_provider_adapter::FileAdapter;
let ignore_keys = ["key_file", "cert_file"];
let mut config: Configuration = Figment::from(Serialized::defaults(
let figment_config = Figment::from(Serialized::defaults(
ConfigurationBuilder::default().private_build().unwrap(),
))
.merge(FileAdapter::wrap(Toml::file(config_file)).ignore(&ignore_keys))
.merge(FileAdapter::wrap(Env::prefixed("LLDAP_").split("__")).ignore(&ignore_keys))
.extract()?;
.merge(
FileAdapter::wrap(Toml::file(&overrides.general_config().config_file)).ignore(&ignore_keys),
)
.merge(FileAdapter::wrap(Env::prefixed("LLDAP_").split("__")).ignore(&ignore_keys));
let mut config: Configuration = figment_config.extract()?;
overrides.override_config(&mut config);
if config.verbose {
@@ -344,6 +520,7 @@ where
.as_ref()
.map(SecUtf8::unsecure)
.unwrap_or_default(),
figment_config,
)?);
if config.jwt_secret == SecUtf8::from("secretjwtsecret") {
println!("WARNING: Default JWT secret used! This is highly unsafe and can allow attackers to log in as admin.");
@@ -360,12 +537,19 @@ where
#[cfg(test)]
mod tests {
use super::*;
use clap::Parser;
use figment::Jail;
use pretty_assertions::assert_eq;
#[test]
fn check_generated_server_key() {
assert_eq!(
bincode::serialize(&get_server_setup("/doesnt/exist", "key seed").unwrap()).unwrap(),
bincode::serialize(
&get_server_setup("/doesnt/exist", "key seed", PrivateKeyLocation::Tests)
.unwrap()
.server_setup
)
.unwrap(),
[
255, 206, 202, 50, 247, 13, 59, 191, 69, 244, 148, 187, 150, 227, 12, 250, 20, 207,
211, 151, 147, 33, 107, 132, 2, 252, 121, 94, 97, 6, 97, 232, 163, 168, 86, 246,
@@ -382,4 +566,153 @@ mod tests {
]
);
}
fn default_run_opts() -> RunOpts {
RunOpts::parse_from::<_, std::ffi::OsString>([])
}
fn write_random_key(jail: &Jail, file: &str) {
use std::io::Write;
let file = std::fs::File::create(jail.directory().join(file)).unwrap();
let mut writer = std::io::BufWriter::new(file);
writer
.write_all(&generate_random_private_key().serialize())
.unwrap();
}
#[test]
fn figment_location_extraction_key_file() {
Jail::expect_with(|jail| {
jail.create_file("lldap_config.toml", r#"key_file = "test""#)?;
jail.set_env("LLDAP_KEY_SEED", "a123");
let ignore_keys = ["key_file", "cert_file"];
let figment_config = Figment::from(Serialized::defaults(
ConfigurationBuilder::default().private_build().unwrap(),
))
.merge(FileAdapter::wrap(Toml::file("lldap_config.toml")).ignore(&ignore_keys))
.merge(FileAdapter::wrap(Env::prefixed("LLDAP_").split("__")).ignore(&ignore_keys));
assert_eq!(
PrivateKeyLocationOrFigment::Figment(figment_config).for_key_file("path"),
PrivateKeyLocation::KeyFile(
ConfigLocation::ConfigFile(
jail.directory()
.join("lldap_config.toml")
.to_string_lossy()
.to_string()
),
"path".into()
)
);
Ok(())
});
}
#[test]
fn check_server_setup_key_extraction_seed_success_with_nonexistent_file() {
Jail::expect_with(|jail| {
jail.create_file("lldap_config.toml", r#"key_file = "test""#)?;
jail.set_env("LLDAP_KEY_SEED", "a123");
init(default_run_opts()).unwrap();
Ok(())
});
}
#[test]
fn check_server_setup_key_extraction_seed_failure_with_existing_file() {
Jail::expect_with(|jail| {
jail.create_file("lldap_config.toml", r#"key_file = "test""#)?;
jail.set_env("LLDAP_KEY_SEED", "a123");
write_random_key(jail, "test");
init(default_run_opts()).unwrap_err();
Ok(())
});
}
#[test]
fn check_server_setup_key_extraction_file_success_with_existing_file() {
Jail::expect_with(|jail| {
jail.create_file("lldap_config.toml", r#"key_file = "test""#)?;
write_random_key(jail, "test");
init(default_run_opts()).unwrap();
Ok(())
});
}
#[test]
fn check_server_setup_key_extraction_file_success_with_nonexistent_file() {
Jail::expect_with(|jail| {
jail.create_file("lldap_config.toml", r#"key_file = "test""#)?;
init(default_run_opts()).unwrap();
Ok(())
});
}
#[test]
fn check_server_setup_key_extraction_file_with_previous_different_file() {
Jail::expect_with(|jail| {
jail.create_file("lldap_config.toml", r#"key_file = "test""#)?;
write_random_key(jail, "test");
let config = init(default_run_opts()).unwrap();
let info = config.get_private_key_info();
write_random_key(jail, "test");
let new_config = init(default_run_opts()).unwrap();
let error_message =
compare_private_key_hashes(Some(&info), &new_config.get_private_key_info())
.unwrap_err()
.to_string();
if let PrivateKeyLocation::KeyFile(_, file) = info.private_key_location {
assert!(
error_message.contains(
"The contents of the private key file from \"test\" have changed"
),
"{error_message}"
);
assert_eq!(file, "test");
} else {
panic!(
"Unexpected private key location: {:?}",
info.private_key_location
);
}
Ok(())
});
}
#[test]
fn check_server_setup_key_extraction_file_to_seed() {
Jail::expect_with(|jail| {
jail.create_file("lldap_config.toml", "")?;
write_random_key(jail, "server_key");
init(default_run_opts()).unwrap();
jail.create_file("lldap_config.toml", r#"key_seed = "test""#)?;
let error_message = init(default_run_opts()).unwrap_err().to_string();
assert!(
error_message.contains("A key_seed was given, but a key file already exists at",),
"{error_message}"
);
Ok(())
});
}
#[test]
fn check_server_setup_key_extraction_file_to_seed_removed_file() {
Jail::expect_with(|jail| {
jail.create_file("lldap_config.toml", "")?;
write_random_key(jail, "server_key");
let config = init(default_run_opts()).unwrap();
let info = config.get_private_key_info();
std::fs::remove_file(jail.directory().join("server_key")).unwrap();
jail.create_file("lldap_config.toml", r#"key_seed = "test""#)?;
let new_config = init(default_run_opts()).unwrap();
let error_message =
compare_private_key_hashes(Some(&info), &new_config.get_private_key_info())
.unwrap_err()
.to_string();
assert!(
error_message.contains("but it used to come from default key file",),
"{error_message}"
);
Ok(())
});
}
}


@@ -1,22 +1,27 @@
use crate::{
domain::{
handler::{BackendHandler, CreateUserRequest, UpdateGroupRequest, UpdateUserRequest},
types::{GroupId, JpegPhoto, UserId},
deserialize::deserialize_attribute_value,
handler::{
AttributeList, BackendHandler, CreateAttributeRequest, CreateGroupRequest,
CreateUserRequest, UpdateGroupRequest, UpdateUserRequest,
},
types::{
AttributeName, AttributeType, AttributeValue as DomainAttributeValue, GroupId,
JpegPhoto, UserId,
},
},
infra::{
access_control::{
AdminBackendHandler, ReadonlyBackendHandler, UserReadableBackendHandler,
UserWriteableBackendHandler,
},
graphql::api::field_error_callback,
graphql::api::{field_error_callback, Context},
},
};
use anyhow::Context as AnyhowContext;
use anyhow::{anyhow, Context as AnyhowContext};
use base64::Engine;
use juniper::{graphql_object, FieldResult, GraphQLInputObject, GraphQLObject};
use tracing::{debug, debug_span, Instrument};
use super::api::Context;
use tracing::{debug, debug_span, Instrument, Span};
#[derive(PartialEq, Eq, Debug)]
/// The top-level GraphQL mutation type.
@@ -32,6 +37,21 @@ impl<Handler: BackendHandler> Mutation<Handler> {
}
}
#[derive(PartialEq, Eq, Debug, GraphQLInputObject)]
// This conflicts with the attribute values returned by the user/group queries.
#[graphql(name = "AttributeValueInput")]
struct AttributeValue {
/// The name of the attribute. It must be present in the schema, and the type informs how
/// to interpret the values.
name: String,
/// The values of the attribute.
/// If the attribute is not a list, the vector must contain exactly one element.
/// Integers (signed 64 bits) are represented as strings.
/// Dates are represented as strings in RFC3339 format, e.g. "2019-10-12T07:20:50.52Z".
/// JpegPhotos are represented as base64 encoded strings. They must be valid JPEGs.
value: Vec<String>,
}
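The doc comments above define the wire conventions for `AttributeValueInput`: signed 64-bit integers travel as strings (dates as RFC3339 strings and photos as base64, which need external crates and are omitted here). A minimal sketch of the integer round trip, with hypothetical helper names:

```rust
// Integers are represented as strings on the GraphQL side.
fn encode_int(v: i64) -> String {
    v.to_string()
}

// The server parses the string back into an i64.
fn decode_int(s: &str) -> Result<i64, std::num::ParseIntError> {
    s.parse::<i64>()
}

fn main() {
    let wire = encode_int(-42);
    assert_eq!(wire, "-42");
    assert_eq!(decode_int(&wire), Ok(-42));
    // A malformed value fails to parse rather than being silently coerced.
    assert!(decode_int("not-a-number").is_err());
    println!("ok");
}
```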
#[derive(PartialEq, Eq, Debug, GraphQLInputObject)]
/// The details required to create a user.
pub struct CreateUserInput {
@@ -40,8 +60,18 @@ pub struct CreateUserInput {
display_name: Option<String>,
first_name: Option<String>,
last_name: Option<String>,
// Base64 encoded JpegPhoto.
/// Base64 encoded JpegPhoto.
avatar: Option<String>,
/// User-defined attributes.
attributes: Option<Vec<AttributeValue>>,
}
#[derive(PartialEq, Eq, Debug, GraphQLInputObject)]
/// The details required to create a group.
pub struct CreateGroupInput {
display_name: String,
/// User-defined attributes.
attributes: Option<Vec<AttributeValue>>,
}
#[derive(PartialEq, Eq, Debug, GraphQLInputObject)]
@@ -52,15 +82,29 @@ pub struct UpdateUserInput {
display_name: Option<String>,
first_name: Option<String>,
last_name: Option<String>,
// Base64 encoded JpegPhoto.
/// Base64 encoded JpegPhoto.
avatar: Option<String>,
/// Attribute names to remove.
/// They are processed before insertions.
remove_attributes: Option<Vec<String>>,
/// Inserts or updates the given attributes.
/// For lists, the entire list must be provided.
insert_attributes: Option<Vec<AttributeValue>>,
}
#[derive(PartialEq, Eq, Debug, GraphQLInputObject)]
/// The fields that can be updated for a group.
pub struct UpdateGroupInput {
/// The group ID.
id: i32,
/// The new display name.
display_name: Option<String>,
/// Attribute names to remove.
/// They are processed before insertions.
remove_attributes: Option<Vec<String>>,
/// Inserts or updates the given attributes.
/// For lists, the entire list must be provided.
insert_attributes: Option<Vec<AttributeValue>>,
}
#[derive(PartialEq, Eq, Debug, GraphQLObject)]
@@ -96,14 +140,22 @@ impl<Handler: BackendHandler> Mutation<Handler> {
.map(JpegPhoto::try_from)
.transpose()
.context("Provided image is not a valid JPEG")?;
let schema = handler.get_schema().await?;
let attributes = user
.attributes
.unwrap_or_default()
.into_iter()
.map(|attr| deserialize_attribute(&schema.get_schema().user_attributes, attr, true))
.collect::<Result<Vec<_>, _>>()?;
handler
.create_user(CreateUserRequest {
user_id: user_id.clone(),
email: user.email,
email: user.email.into(),
display_name: user.display_name,
first_name: user.first_name,
last_name: user.last_name,
avatar,
attributes,
})
.instrument(span.clone())
.await?;
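The avatar handling above chains `.map(...).transpose()?` twice to turn an `Option<Result<_, _>>` into a `Result<Option<_>, _>`, so `?` can short-circuit on a decode failure while an absent avatar stays `None`. A small sketch of the same pattern with a stand-in parser instead of base64/JPEG decoding:

```rust
// Sketch of the Option<Result<..>> -> Result<Option<..>> pipeline:
// `transpose` lets `?` propagate the error while None passes through.
fn parse_optional(input: Option<&str>) -> Result<Option<i32>, std::num::ParseIntError> {
    input.map(str::parse::<i32>).transpose()
}

fn main() {
    // No input: no error, no value.
    assert_eq!(parse_optional(None).unwrap(), None);
    // Valid input: parsed value.
    assert_eq!(parse_optional(Some("42")).unwrap(), Some(42));
    // Invalid input: the inner error surfaces.
    assert!(parse_optional(Some("not a number")).is_err());
}
```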
@@ -122,15 +174,25 @@ impl<Handler: BackendHandler> Mutation<Handler> {
span.in_scope(|| {
debug!(?name);
});
let handler = context
.get_admin_handler()
.ok_or_else(field_error_callback(&span, "Unauthorized group creation"))?;
let group_id = handler.create_group(&name).await?;
Ok(handler
.get_group_details(group_id)
.instrument(span)
.await
.map(Into::into)?)
create_group_with_details(
context,
CreateGroupInput {
display_name: name,
attributes: Some(Vec::new()),
},
span,
)
.await
}
async fn create_group_with_details(
context: &Context<Handler>,
request: CreateGroupInput,
) -> FieldResult<super::query::Group<Handler>> {
let span = debug_span!("[GraphQL mutation] create_group_with_details");
span.in_scope(|| {
debug!(?request);
});
create_group_with_details(context, request, span).await
}
async fn update_user(
@@ -145,6 +207,7 @@ impl<Handler: BackendHandler> Mutation<Handler> {
let handler = context
.get_writeable_handler(&user_id)
.ok_or_else(field_error_callback(&span, "Unauthorized user update"))?;
let is_admin = context.validation_result.is_admin();
let avatar = user
.avatar
.map(|bytes| base64::engine::general_purpose::STANDARD.decode(bytes))
@@ -153,14 +216,28 @@ impl<Handler: BackendHandler> Mutation<Handler> {
.map(JpegPhoto::try_from)
.transpose()
.context("Provided image is not a valid JPEG")?;
let schema = handler.get_schema().await?;
let insert_attributes = user
.insert_attributes
.unwrap_or_default()
.into_iter()
.map(|attr| deserialize_attribute(&schema.get_schema().user_attributes, attr, is_admin))
.collect::<Result<Vec<_>, _>>()?;
handler
.update_user(UpdateUserRequest {
user_id,
email: user.email,
email: user.email.map(Into::into),
display_name: user.display_name,
first_name: user.first_name,
last_name: user.last_name,
avatar,
delete_attributes: user
.remove_attributes
.unwrap_or_default()
.into_iter()
.map(Into::into)
.collect(),
insert_attributes,
})
.instrument(span)
.await?;
@@ -178,14 +255,28 @@ impl<Handler: BackendHandler> Mutation<Handler> {
let handler = context
.get_admin_handler()
.ok_or_else(field_error_callback(&span, "Unauthorized group update"))?;
if group.id == 1 {
span.in_scope(|| debug!("Cannot change admin group details"));
return Err("Cannot change admin group details".into());
if group.id == 1 && group.display_name.is_some() {
span.in_scope(|| debug!("Cannot change lldap_admin group name"));
return Err("Cannot change lldap_admin group name".into());
}
let schema = handler.get_schema().await?;
let insert_attributes = group
.insert_attributes
.unwrap_or_default()
.into_iter()
.map(|attr| deserialize_attribute(&schema.get_schema().group_attributes, attr, true))
.collect::<Result<Vec<_>, _>>()?;
handler
.update_group(UpdateGroupRequest {
group_id: GroupId(group.id),
display_name: group.display_name,
display_name: group.display_name.map(Into::into),
delete_attributes: group
.remove_attributes
.unwrap_or_default()
.into_iter()
.map(Into::into)
.collect(),
insert_attributes,
})
.instrument(span)
.await?;
@@ -276,4 +367,183 @@ impl<Handler: BackendHandler> Mutation<Handler> {
.await?;
Ok(Success::new())
}
async fn add_user_attribute(
context: &Context<Handler>,
name: String,
attribute_type: AttributeType,
is_list: bool,
is_visible: bool,
is_editable: bool,
) -> FieldResult<Success> {
let span = debug_span!("[GraphQL mutation] add_user_attribute");
span.in_scope(|| {
debug!(?name, ?attribute_type, is_list, is_visible, is_editable);
});
let handler = context
.get_admin_handler()
.ok_or_else(field_error_callback(
&span,
"Unauthorized attribute creation",
))?;
handler
.add_user_attribute(CreateAttributeRequest {
name: name.into(),
attribute_type,
is_list,
is_visible,
is_editable,
})
.instrument(span)
.await?;
Ok(Success::new())
}
async fn add_group_attribute(
context: &Context<Handler>,
name: String,
attribute_type: AttributeType,
is_list: bool,
is_visible: bool,
is_editable: bool,
) -> FieldResult<Success> {
let span = debug_span!("[GraphQL mutation] add_group_attribute");
span.in_scope(|| {
debug!(?name, ?attribute_type, is_list, is_visible, is_editable);
});
let handler = context
.get_admin_handler()
.ok_or_else(field_error_callback(
&span,
"Unauthorized attribute creation",
))?;
handler
.add_group_attribute(CreateAttributeRequest {
name: name.into(),
attribute_type,
is_list,
is_visible,
is_editable,
})
.instrument(span)
.await?;
Ok(Success::new())
}
async fn delete_user_attribute(
context: &Context<Handler>,
name: String,
) -> FieldResult<Success> {
let span = debug_span!("[GraphQL mutation] delete_user_attribute");
let name = AttributeName::from(name);
span.in_scope(|| {
debug!(?name);
});
let handler = context
.get_admin_handler()
.ok_or_else(field_error_callback(
&span,
"Unauthorized attribute deletion",
))?;
let schema = handler.get_schema().await?;
let attribute_schema = schema
.get_schema()
.user_attributes
.get_attribute_schema(&name)
.ok_or_else(|| anyhow!("Attribute {} is not defined in the schema", &name))?;
if attribute_schema.is_hardcoded {
return Err(anyhow!("Permission denied: Attribute {} cannot be deleted", &name).into());
}
handler
.delete_user_attribute(&name)
.instrument(span)
.await?;
Ok(Success::new())
}
async fn delete_group_attribute(
context: &Context<Handler>,
name: String,
) -> FieldResult<Success> {
let span = debug_span!("[GraphQL mutation] delete_group_attribute");
let name = AttributeName::from(name);
span.in_scope(|| {
debug!(?name);
});
let handler = context
.get_admin_handler()
.ok_or_else(field_error_callback(
&span,
"Unauthorized attribute deletion",
))?;
let schema = handler.get_schema().await?;
let attribute_schema = schema
.get_schema()
.group_attributes
.get_attribute_schema(&name)
.ok_or_else(|| anyhow!("Attribute {} is not defined in the schema", &name))?;
if attribute_schema.is_hardcoded {
return Err(anyhow!("Permission denied: Attribute {} cannot be deleted", &name).into());
}
handler
.delete_group_attribute(&name)
.instrument(span)
.await?;
Ok(Success::new())
}
}
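Both `delete_user_attribute` and `delete_group_attribute` above share the same guard: attributes flagged `is_hardcoded` in the schema are refused. A standalone sketch of that check (simplified struct, not the backend's `AttributeSchema`):

```rust
// Hypothetical miniature of the delete guard: built-in (hardcoded)
// schema attributes cannot be removed, user-defined ones can.
struct AttrSchema {
    name: &'static str,
    is_hardcoded: bool,
}

fn check_deletable(schema: &AttrSchema) -> Result<(), String> {
    if schema.is_hardcoded {
        return Err(format!(
            "Permission denied: Attribute {} cannot be deleted",
            schema.name
        ));
    }
    Ok(())
}

fn main() {
    assert!(check_deletable(&AttrSchema { name: "uuid", is_hardcoded: true }).is_err());
    assert!(check_deletable(&AttrSchema { name: "team", is_hardcoded: false }).is_ok());
}
```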
async fn create_group_with_details<Handler: BackendHandler>(
context: &Context<Handler>,
request: CreateGroupInput,
span: Span,
) -> FieldResult<super::query::Group<Handler>> {
let handler = context
.get_admin_handler()
.ok_or_else(field_error_callback(&span, "Unauthorized group creation"))?;
let schema = handler.get_schema().await?;
let attributes = request
.attributes
.unwrap_or_default()
.into_iter()
.map(|attr| deserialize_attribute(&schema.get_schema().group_attributes, attr, true))
.collect::<Result<Vec<_>, _>>()?;
let request = CreateGroupRequest {
display_name: request.display_name.into(),
attributes,
};
let group_id = handler.create_group(request).await?;
Ok(handler
.get_group_details(group_id)
.instrument(span)
.await
.map(Into::into)?)
}
fn deserialize_attribute(
attribute_schema: &AttributeList,
attribute: AttributeValue,
is_admin: bool,
) -> FieldResult<DomainAttributeValue> {
let attribute_name = AttributeName::from(attribute.name.as_str());
let attribute_schema = attribute_schema
.get_attribute_schema(&attribute_name)
.ok_or_else(|| anyhow!("Attribute {} is not defined in the schema", attribute.name))?;
if !is_admin && !attribute_schema.is_editable {
return Err(anyhow!(
"Permission denied: Attribute {} is not editable by regular users",
attribute.name
)
.into());
}
let deserialized_values = deserialize_attribute_value(
&attribute.value,
attribute_schema.attribute_type,
attribute_schema.is_list,
)
.context(format!("While deserializing attribute {}", attribute.name))?;
Ok(DomainAttributeValue {
name: attribute_name,
value: deserialized_values,
})
}
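The `is_admin` flag in `deserialize_attribute` above gates writes to non-editable attributes: admins may always set them, regular users only when the schema marks the attribute editable. A self-contained sketch of just that check:

```rust
// Simplified model of the editability check in deserialize_attribute.
struct AttrSchema {
    is_editable: bool,
}

fn check_editable(schema: &AttrSchema, is_admin: bool, name: &str) -> Result<(), String> {
    if !is_admin && !schema.is_editable {
        return Err(format!(
            "Permission denied: Attribute {} is not editable by regular users",
            name
        ));
    }
    Ok(())
}

fn main() {
    let locked = AttrSchema { is_editable: false };
    // Admins bypass the check; regular users are rejected.
    assert!(check_editable(&locked, true, "uuid").is_ok());
    assert!(check_editable(&locked, false, "uuid").is_err());
    // Editable attributes are writable by everyone.
    let open = AttrSchema { is_editable: true };
    assert!(check_editable(&open, false, "nickname").is_ok());
}
```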


@@ -1,28 +1,38 @@
use crate::{
domain::{
handler::{BackendHandler, SchemaBackendHandler},
deserialize::deserialize_attribute_value,
handler::{BackendHandler, ReadSchemaBackendHandler},
ldap::utils::{map_user_field, UserFieldType},
types::{AttributeType, GroupDetails, GroupId, JpegPhoto, UserColumn, UserId},
model::UserColumn,
schema::{
PublicSchema, SchemaAttributeExtractor, SchemaGroupAttributeExtractor,
SchemaUserAttributeExtractor,
},
types::{AttributeType, GroupDetails, GroupId, JpegPhoto, UserId},
},
infra::{
access_control::{ReadonlyBackendHandler, UserReadableBackendHandler},
graphql::api::{field_error_callback, Context},
schema::PublicSchema,
},
};
use anyhow::Context as AnyhowContext;
use chrono::{NaiveDateTime, TimeZone};
use juniper::{graphql_object, FieldError, FieldResult, GraphQLInputObject};
use serde::{Deserialize, Serialize};
use tracing::{debug, debug_span, Instrument};
use tracing::{debug, debug_span, Instrument, Span};
type DomainRequestFilter = crate::domain::handler::UserRequestFilter;
type DomainUser = crate::domain::types::User;
type DomainGroup = crate::domain::types::Group;
type DomainUserAndGroups = crate::domain::types::UserAndGroups;
type DomainSchema = crate::infra::schema::PublicSchema;
type DomainUserAndSchema = crate::domain::types::UserAndSchema;
type DomainGroupAndSchema = crate::domain::types::GroupAndSchema;
type DomainGroupDetailsAndSchema = crate::domain::types::GroupDetailsAndSchema;
type DomainUserAndGroupsAndSchema = crate::domain::types::UserAndGroupsAndSchema;
type DomainAttributeList = crate::domain::handler::AttributeList;
type DomainAttributeSchema = crate::domain::handler::AttributeSchema;
type DomainAttributeValue = crate::domain::types::AttributeValue;
type DomainAttributeValueAndSchema = crate::domain::types::AttributeValueAndSchema;
#[derive(PartialEq, Eq, Debug, GraphQLInputObject)]
/// A filter for requests, specifying a boolean expression based on field constraints. Only one of
@@ -36,73 +46,65 @@ pub struct RequestFilter {
member_of_id: Option<i32>,
}
impl TryInto<DomainRequestFilter> for RequestFilter {
type Error = String;
fn try_into(self) -> Result<DomainRequestFilter, Self::Error> {
let mut field_count = 0;
if self.any.is_some() {
field_count += 1;
}
if self.all.is_some() {
field_count += 1;
}
if self.not.is_some() {
field_count += 1;
}
if self.eq.is_some() {
field_count += 1;
}
if self.member_of.is_some() {
field_count += 1;
}
if self.member_of_id.is_some() {
field_count += 1;
}
if field_count == 0 {
return Err("No field specified in request filter".to_string());
}
if field_count > 1 {
return Err("Multiple fields specified in request filter".to_string());
}
if let Some(e) = self.eq {
return match map_user_field(&e.field.to_ascii_lowercase()) {
UserFieldType::NoMatch => Err(format!("Unknown request filter: {}", &e.field)),
UserFieldType::PrimaryField(UserColumn::UserId) => {
Ok(DomainRequestFilter::UserId(UserId::new(&e.value)))
impl RequestFilter {
fn try_into_domain_filter(self, schema: &PublicSchema) -> FieldResult<DomainRequestFilter> {
match (
self.eq,
self.any,
self.all,
self.not,
self.member_of,
self.member_of_id,
) {
(Some(eq), None, None, None, None, None) => {
match map_user_field(&eq.field.as_str().into(), schema) {
UserFieldType::NoMatch => {
Err(format!("Unknown request filter: {}", &eq.field).into())
}
UserFieldType::PrimaryField(UserColumn::UserId) => {
Ok(DomainRequestFilter::UserId(UserId::new(&eq.value)))
}
UserFieldType::PrimaryField(column) => {
Ok(DomainRequestFilter::Equality(column, eq.value))
}
UserFieldType::Attribute(name, typ, false) => {
let value = deserialize_attribute_value(&[eq.value], typ, false)
.context(format!("While deserializing attribute {}", &name))?;
Ok(DomainRequestFilter::AttributeEquality(name, value))
}
UserFieldType::Attribute(_, _, true) => {
Err("Equality not supported for list fields".into())
}
UserFieldType::MemberOf => Ok(DomainRequestFilter::MemberOf(eq.value.into())),
UserFieldType::ObjectClass | UserFieldType::Dn | UserFieldType::EntryDn => {
Err("Ldap fields not supported in request filter".into())
}
}
UserFieldType::PrimaryField(column) => {
Ok(DomainRequestFilter::Equality(column, e.value))
}
UserFieldType::Attribute(column) => Ok(DomainRequestFilter::AttributeEquality(
column.to_owned(),
e.value,
)),
};
}
(None, Some(any), None, None, None, None) => Ok(DomainRequestFilter::Or(
any.into_iter()
.map(|f| f.try_into_domain_filter(schema))
.collect::<FieldResult<Vec<_>>>()?,
)),
(None, None, Some(all), None, None, None) => Ok(DomainRequestFilter::And(
all.into_iter()
.map(|f| f.try_into_domain_filter(schema))
.collect::<FieldResult<Vec<_>>>()?,
)),
(None, None, None, Some(not), None, None) => Ok(DomainRequestFilter::Not(Box::new(
(*not).try_into_domain_filter(schema)?,
))),
(None, None, None, None, Some(group), None) => {
Ok(DomainRequestFilter::MemberOf(group.into()))
}
(None, None, None, None, None, Some(group_id)) => {
Ok(DomainRequestFilter::MemberOfId(GroupId(group_id)))
}
(None, None, None, None, None, None) => {
Err("No field specified in request filter".into())
}
_ => Err("Multiple fields specified in request filter".into()),
}
if let Some(c) = self.any {
return Ok(DomainRequestFilter::Or(
c.into_iter()
.map(TryInto::try_into)
.collect::<Result<Vec<_>, String>>()?,
));
}
if let Some(c) = self.all {
return Ok(DomainRequestFilter::And(
c.into_iter()
.map(TryInto::try_into)
.collect::<Result<Vec<_>, String>>()?,
));
}
if let Some(c) = self.not {
return Ok(DomainRequestFilter::Not(Box::new((*c).try_into()?)));
}
if let Some(group) = self.member_of {
return Ok(DomainRequestFilter::MemberOf(group));
}
if let Some(group_id) = self.member_of_id {
return Ok(DomainRequestFilter::MemberOfId(GroupId(group_id)));
}
unreachable!();
}
}
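The rewrite above replaces the old `field_count` bookkeeping with a single tuple match over all the `Option` fields: exactly one `Some` selects a branch, all-`None` and multi-`Some` fall through to the two error arms. A reduced, runnable sketch of the pattern (hypothetical three-field filter):

```rust
// Miniature of the tuple-match dispatch in try_into_domain_filter:
// exactly one of the Options must be Some.
fn classify(
    eq: Option<&str>,
    any: Option<&str>,
    not: Option<&str>,
) -> Result<String, String> {
    match (eq, any, not) {
        (Some(v), None, None) => Ok(format!("eq:{v}")),
        (None, Some(v), None) => Ok(format!("any:{v}")),
        (None, None, Some(v)) => Ok(format!("not:{v}")),
        (None, None, None) => Err("No field specified in request filter".to_string()),
        // Any combination with two or more Somes lands here.
        _ => Err("Multiple fields specified in request filter".to_string()),
    }
}

fn main() {
    assert_eq!(classify(Some("mail"), None, None).unwrap(), "eq:mail");
    assert!(classify(None, None, None).is_err());
    assert!(classify(Some("a"), Some("b"), None).is_err());
}
```

The compiler checks the match for exhaustiveness, which the old counter-plus-`unreachable!()` version could not guarantee.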
@@ -146,11 +148,15 @@ impl<Handler: BackendHandler> Query<Handler> {
&span,
"Unauthorized access to user data",
))?;
Ok(handler
let user = handler
.get_user_details(&user_id)
.instrument(span)
.await
.map(Into::into)?)
.await?;
let schema = self.get_schema(context, span).await?;
return Ok(DomainUserAndSchema {
user,
schema: schema.get_schema().user_attributes.attributes,
}.into())
}
async fn users(
@@ -167,8 +173,14 @@ impl<Handler: BackendHandler> Query<Handler> {
&span,
"Unauthorized access to user list",
))?;
let schema = self.get_schema(context, span.clone()).await?;
Ok(handler
.list_users(filters.map(TryInto::try_into).transpose()?, false)
.list_users(
filters
.map(|f| f.try_into_domain_filter(&schema))
.transpose()?,
false,
)
.instrument(span)
.await
.map(|v| v.into_iter().map(Into::into).collect())?)
@@ -209,6 +221,16 @@ impl<Handler: BackendHandler> Query<Handler> {
async fn schema(context: &Context<Handler>) -> FieldResult<Schema<Handler>> {
let span = debug_span!("[GraphQL query] get_schema");
self.get_schema(context, span).await.map(Into::into)
}
}
impl<Handler: BackendHandler> Query<Handler> {
async fn get_schema(
&self,
context: &Context<Handler>,
span: Span,
) -> FieldResult<PublicSchema> {
let handler = context
.handler
.get_user_restricted_lister_handler(&context.validation_result);
@@ -216,8 +238,7 @@ impl<Handler: BackendHandler> Query<Handler> {
.get_schema()
.instrument(span)
.await
.map(Into::<PublicSchema>::into)
.map(Into::into)?)
.map(Into::<PublicSchema>::into)?)
}
}
@@ -225,6 +246,7 @@ impl<Handler: BackendHandler> Query<Handler> {
/// Represents a single user.
pub struct User<Handler: BackendHandler> {
user: DomainUser,
schema: Vec<DomainAttributeSchema>,
_phantom: std::marker::PhantomData<Box<Handler>>,
}
@@ -233,6 +255,7 @@ impl<Handler: BackendHandler> Default for User<Handler> {
fn default() -> Self {
Self {
user: DomainUser::default(),
schema: Vec::default(),
_phantom: std::marker::PhantomData,
}
}
@@ -245,7 +268,7 @@ impl<Handler: BackendHandler> User<Handler> {
}
fn email(&self) -> &str {
&self.user.email
self.user.email.as_str()
}
fn display_name(&self) -> &str {
@@ -256,7 +279,7 @@ impl<Handler: BackendHandler> User<Handler> {
self.user
.attributes
.iter()
.find(|a| a.name == "first_name")
.find(|a| a.name.as_str() == "first_name")
.map(|a| a.value.unwrap())
.unwrap_or("")
}
@@ -265,7 +288,7 @@ impl<Handler: BackendHandler> User<Handler> {
self.user
.attributes
.iter()
.find(|a| a.name == "last_name")
.find(|a| a.name.as_str() == "last_name")
.map(|a| a.value.unwrap())
.unwrap_or("")
}
@@ -274,7 +297,7 @@ impl<Handler: BackendHandler> User<Handler> {
self.user
.attributes
.iter()
.find(|a| a.name == "avatar")
.find(|a| a.name.as_str() == "avatar")
.map(|a| String::from(&a.value.unwrap::<JpegPhoto>()))
}
@@ -320,19 +343,21 @@ impl<Handler: BackendHandler> User<Handler> {
}
}
impl<Handler: BackendHandler> From<DomainUser> for User<Handler> {
fn from(user: DomainUser) -> Self {
impl<Handler: BackendHandler> From<DomainUserAndSchema> for User<Handler> {
fn from(user: DomainUserAndSchema) -> Self {
Self {
user,
user: user.user,
schema: user.schema,
_phantom: std::marker::PhantomData,
}
}
}
impl<Handler: BackendHandler> From<DomainUserAndGroups> for User<Handler> {
fn from(user: DomainUserAndGroups) -> Self {
impl<Handler: BackendHandler> From<DomainUserAndGroupsAndSchema> for User<Handler> {
fn from(user: DomainUserAndGroupsAndSchema) -> Self {
Self {
user: user.user,
schema: user.user_schema,
_phantom: std::marker::PhantomData,
}
}
@@ -346,6 +371,7 @@ pub struct Group<Handler: BackendHandler> {
creation_date: chrono::NaiveDateTime,
uuid: String,
attributes: Vec<DomainAttributeValue>,
schema: Vec<DomainAttributeSchema>,
members: Option<Vec<String>>,
_phantom: std::marker::PhantomData<Box<Handler>>,
}
@@ -397,29 +423,31 @@ impl<Handler: BackendHandler> Group<Handler> {
}
}
impl<Handler: BackendHandler> From<GroupDetails> for Group<Handler> {
fn from(group_details: GroupDetails) -> Self {
impl<Handler: BackendHandler> From<DomainGroupDetailsAndSchema> for Group<Handler> {
fn from(group_details: DomainGroupDetailsAndSchema) -> Self {
Self {
group_id: group_details.group_id.0,
display_name: group_details.display_name,
creation_date: group_details.creation_date,
uuid: group_details.uuid.into_string(),
attributes: group_details.attributes,
group_id: group_details.group.group_id.0,
display_name: group_details.group.display_name.to_string(),
creation_date: group_details.group.creation_date,
uuid: group_details.group.uuid.into_string(),
attributes: group_details.group.attributes,
members: None,
schema: group_details.schema,
_phantom: std::marker::PhantomData,
}
}
}
impl<Handler: BackendHandler> From<DomainGroup> for Group<Handler> {
fn from(group: DomainGroup) -> Self {
impl<Handler: BackendHandler> From<DomainGroupAndSchema> for Group<Handler> {
fn from(group: DomainGroupAndSchema) -> Self {
Self {
group_id: group.id.0,
display_name: group.display_name,
creation_date: group.creation_date,
uuid: group.uuid.into_string(),
attributes: group.attributes,
members: Some(group.users.into_iter().map(UserId::into_string).collect()),
group_id: group.group.id.0,
display_name: group.group.display_name.to_string(),
creation_date: group.group.creation_date,
uuid: group.group.uuid.into_string(),
attributes: group.group.attributes,
members: Some(group.group.users.into_iter().map(UserId::into_string).collect()),
schema: group.schema,
_phantom: std::marker::PhantomData,
}
}
@@ -434,11 +462,10 @@ pub struct AttributeSchema<Handler: BackendHandler> {
#[graphql_object(context = Context<Handler>)]
impl<Handler: BackendHandler> AttributeSchema<Handler> {
fn name(&self) -> String {
self.schema.name.clone()
self.schema.name.to_string()
}
fn attribute_type(&self) -> String {
let name: &'static str = self.schema.attribute_type.into();
name.to_owned()
fn attribute_type(&self) -> AttributeType {
self.schema.attribute_type
}
fn is_list(&self) -> bool {
self.schema.is_list
@@ -492,7 +519,7 @@ impl<Handler: BackendHandler> From<DomainAttributeList> for AttributeList<Handle
#[derive(PartialEq, Eq, Debug, Serialize, Deserialize)]
pub struct Schema<Handler: BackendHandler> {
schema: DomainSchema,
schema: PublicSchema,
_phantom: std::marker::PhantomData<Box<Handler>>,
}
@@ -506,8 +533,8 @@ impl<Handler: BackendHandler> Schema<Handler> {
}
}
impl<Handler: BackendHandler> From<DomainSchema> for Schema<Handler> {
fn from(value: DomainSchema) -> Self {
impl<Handler: BackendHandler> From<PublicSchema> for Schema<Handler> {
fn from(value: PublicSchema) -> Self {
Self {
schema: value,
_phantom: std::marker::PhantomData,
@@ -515,29 +542,10 @@ impl<Handler: BackendHandler> From<DomainSchema> for Schema<Handler> {
}
}
trait SchemaAttributeExtractor: std::marker::Send {
fn get_attributes(schema: &DomainSchema) -> &DomainAttributeList;
}
struct SchemaUserAttributeExtractor;
impl SchemaAttributeExtractor for SchemaUserAttributeExtractor {
fn get_attributes(schema: &DomainSchema) -> &DomainAttributeList {
&schema.get_schema().user_attributes
}
}
struct SchemaGroupAttributeExtractor;
impl SchemaAttributeExtractor for SchemaGroupAttributeExtractor {
fn get_attributes(schema: &DomainSchema) -> &DomainAttributeList {
&schema.get_schema().group_attributes
}
}
#[derive(PartialEq, Eq, Debug, Serialize, Deserialize)]
pub struct AttributeValue<Handler: BackendHandler, Extractor> {
attribute: DomainAttributeValue,
schema: DomainAttributeSchema,
_phantom: std::marker::PhantomData<Box<Handler>>,
_phantom_extractor: std::marker::PhantomData<Extractor>,
}
@@ -547,7 +555,7 @@ impl<Handler: BackendHandler, Extractor: SchemaAttributeExtractor>
AttributeValue<Handler, Extractor>
{
fn name(&self) -> &str {
&self.attribute.name
self.attribute.name.as_str()
}
async fn value(&self, context: &Context<Handler>) -> FieldResult<Vec<String>> {
let handler = context
@@ -610,12 +618,13 @@ pub fn serialize_attribute(
.ok_or_else(|| FieldError::from(anyhow::anyhow!("Unknown attribute: {}", &attribute.name)))
}
impl<Handler: BackendHandler, Extractor> From<DomainAttributeValue>
impl<Handler: BackendHandler, Extractor> From<DomainAttributeValueAndSchema>
for AttributeValue<Handler, Extractor>
{
fn from(value: DomainAttributeValue) -> Self {
fn from(value: DomainAttributeValueAndSchema) -> Self {
Self {
attribute: value,
attribute: value.value,
schema: value.schema,
_phantom: std::marker::PhantomData,
_phantom_extractor: std::marker::PhantomData,
}
@@ -628,7 +637,7 @@ mod tests {
use crate::{
domain::{
handler::AttributeList,
types::{AttributeType, Serialized},
types::{AttributeName, AttributeType, Serialized},
},
infra::{
access_control::{Permission, ValidationResults},
@@ -686,7 +695,7 @@ mod tests {
user_attributes: DomainAttributeList {
attributes: vec![
DomainAttributeSchema {
name: "first_name".to_owned(),
name: "first_name".into(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
@@ -694,7 +703,7 @@ mod tests {
is_hardcoded: true,
},
DomainAttributeSchema {
name: "last_name".to_owned(),
name: "last_name".into(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
@@ -705,7 +714,7 @@ mod tests {
},
group_attributes: DomainAttributeList {
attributes: vec![DomainAttributeSchema {
name: "club_name".to_owned(),
name: "club_name".into(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
@@ -720,16 +729,16 @@ mod tests {
.return_once(|_| {
Ok(DomainUser {
user_id: UserId::new("bob"),
email: "bob@bobbers.on".to_string(),
email: "bob@bobbers.on".into(),
creation_date: chrono::Utc.timestamp_millis_opt(42).unwrap().naive_utc(),
uuid: crate::uuid!("b1a2a3a4b1b2c1c2d1d2d3d4d5d6d7d8"),
attributes: vec![
DomainAttributeValue {
name: "first_name".to_owned(),
name: "first_name".into(),
value: Serialized::from("Bob"),
},
DomainAttributeValue {
name: "last_name".to_owned(),
name: "last_name".into(),
value: Serialized::from("Bobberson"),
},
],
@@ -739,17 +748,17 @@ mod tests {
let mut groups = HashSet::new();
groups.insert(GroupDetails {
group_id: GroupId(3),
display_name: "Bobbersons".to_string(),
display_name: "Bobbersons".into(),
creation_date: chrono::Utc.timestamp_nanos(42).naive_utc(),
uuid: crate::uuid!("a1a2a3a4b1b2c1c2d1d2d3d4d5d6d7d8"),
attributes: vec![DomainAttributeValue {
name: "club_name".to_owned(),
name: "club_name".into(),
value: Serialized::from("Gang of Four"),
}],
});
groups.insert(GroupDetails {
group_id: GroupId(7),
display_name: "Jefferees".to_string(),
display_name: "Jefferees".into(),
creation_date: chrono::Utc.timestamp_nanos(12).naive_utc(),
uuid: crate::uuid!("b1a2a3a4b1b2c1c2d1d2d3d4d5d6d7d8"),
attributes: Vec::new(),
@@ -829,6 +838,7 @@ mod tests {
}"#;
let mut mock = MockTestBackendHandler::new();
setup_default_schema(&mut mock);
mock.expect_list_users()
.with(
eq(Some(DomainRequestFilter::Or(vec![
@@ -838,8 +848,8 @@ mod tests {
"robert@bobbers.on".to_owned(),
),
DomainRequestFilter::AttributeEquality(
"first_name".to_owned(),
"robert".to_owned(),
AttributeName::from("first_name"),
Serialized::from("robert"),
),
]))),
eq(false),
@@ -849,7 +859,7 @@ mod tests {
DomainUserAndGroups {
user: DomainUser {
user_id: UserId::new("bob"),
email: "bob@bobbers.on".to_owned(),
email: "bob@bobbers.on".into(),
..Default::default()
},
groups: None,
@@ -857,7 +867,7 @@ mod tests {
DomainUserAndGroups {
user: DomainUser {
user_id: UserId::new("robert"),
email: "robert@bobbers.on".to_owned(),
email: "robert@bobbers.on".into(),
..Default::default()
},
groups: None,
@@ -935,7 +945,7 @@ mod tests {
"attributes": [
{
"name": "avatar",
"attributeType": "JpegPhoto",
"attributeType": "JPEG_PHOTO",
"isList": false,
"isVisible": true,
"isEditable": true,
@@ -943,7 +953,7 @@ mod tests {
},
{
"name": "creation_date",
"attributeType": "DateTime",
"attributeType": "DATE_TIME",
"isList": false,
"isVisible": true,
"isEditable": false,
@@ -951,7 +961,7 @@ mod tests {
},
{
"name": "display_name",
"attributeType": "String",
"attributeType": "STRING",
"isList": false,
"isVisible": true,
"isEditable": true,
@@ -959,7 +969,7 @@ mod tests {
},
{
"name": "first_name",
"attributeType": "String",
"attributeType": "STRING",
"isList": false,
"isVisible": true,
"isEditable": true,
@@ -967,7 +977,7 @@ mod tests {
},
{
"name": "last_name",
"attributeType": "String",
"attributeType": "STRING",
"isList": false,
"isVisible": true,
"isEditable": true,
@@ -975,7 +985,7 @@ mod tests {
},
{
"name": "mail",
"attributeType": "String",
"attributeType": "STRING",
"isList": false,
"isVisible": true,
"isEditable": true,
@@ -983,7 +993,7 @@ mod tests {
},
{
"name": "user_id",
"attributeType": "String",
"attributeType": "STRING",
"isList": false,
"isVisible": true,
"isEditable": false,
@@ -991,7 +1001,7 @@ mod tests {
},
{
"name": "uuid",
"attributeType": "String",
"attributeType": "STRING",
"isList": false,
"isVisible": true,
"isEditable": false,
@@ -1003,7 +1013,7 @@ mod tests {
"attributes": [
{
"name": "creation_date",
"attributeType": "DateTime",
"attributeType": "DATE_TIME",
"isList": false,
"isVisible": true,
"isEditable": false,
@@ -1011,7 +1021,7 @@ mod tests {
},
{
"name": "display_name",
"attributeType": "String",
"attributeType": "STRING",
"isList": false,
"isVisible": true,
"isEditable": true,
@@ -1019,7 +1029,7 @@ mod tests {
},
{
"name": "group_id",
"attributeType": "Integer",
"attributeType": "INTEGER",
"isList": false,
"isVisible": true,
"isEditable": false,
@@ -1027,7 +1037,7 @@ mod tests {
},
{
"name": "uuid",
"attributeType": "String",
"attributeType": "STRING",
"isList": false,
"isVisible": true,
"isEditable": false,
@@ -1060,7 +1070,7 @@ mod tests {
Ok(crate::domain::handler::Schema {
user_attributes: AttributeList {
attributes: vec![crate::domain::handler::AttributeSchema {
name: "invisible".to_owned(),
name: "invisible".into(),
attribute_type: AttributeType::JpegPhoto,
is_list: false,
is_visible: false,


@@ -1,7 +1,7 @@
use crate::{
domain::{
handler::{
BackendHandler, BindRequest, CreateUserRequest, LoginHandler, SchemaBackendHandler,
BackendHandler, BindRequest, CreateUserRequest, LoginHandler, ReadSchemaBackendHandler,
},
ldap::{
error::{LdapError, LdapResult},
@@ -12,7 +12,8 @@ use crate::{
},
},
opaque_handler::OpaqueHandler,
types::{Group, JpegPhoto, UserAndGroups, UserId},
schema::PublicSchema,
types::{AttributeName, Email, Group, JpegPhoto, UserAndGroups, UserId},
},
infra::access_control::{
AccessControlledBackendHandler, AdminBackendHandler, UserAndGroupListerBackendHandler,
@@ -232,8 +233,8 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
pub fn new(
backend_handler: AccessControlledBackendHandler<Backend>,
mut ldap_base_dn: String,
ignored_user_attributes: Vec<String>,
ignored_group_attributes: Vec<String>,
ignored_user_attributes: Vec<AttributeName>,
ignored_group_attributes: Vec<AttributeName>,
) -> Self {
ldap_base_dn.make_ascii_lowercase();
Self {
@@ -273,7 +274,14 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
Ok(s) => s,
Err(e) => return (LdapResultCode::NamingViolation, e.to_string()),
};
let LdapBindCred::Simple(password) = &request.cred;
let password = if let LdapBindCred::Simple(password) = &request.cred {
password
} else {
return (
LdapResultCode::UnwillingToPerform,
"SASL not supported".to_string(),
);
};
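The change above swaps an irrefutable `let` destructuring for an `if let`/`else`, needed once the bind credential enum gains a SASL variant: simple binds extract the password, anything else is rejected with `UnwillingToPerform`. A standalone sketch with a hypothetical two-variant enum:

```rust
// Hypothetical stand-in for the LDAP bind credential enum.
enum BindCred {
    Simple(String),
    Sasl(Vec<u8>),
}

// Only simple binds are accepted; SASL is rejected up front.
fn extract_password(cred: &BindCred) -> Result<&str, &'static str> {
    if let BindCred::Simple(password) = cred {
        Ok(password)
    } else {
        Err("SASL not supported")
    }
}

fn main() {
    let simple = BindCred::Simple("s3cret".to_string());
    assert_eq!(extract_password(&simple).unwrap(), "s3cret");
    assert!(extract_password(&BindCred::Sasl(vec![1, 2])).is_err());
}
```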
match self
.get_login_handler()
.bind(BindRequest {
@@ -298,7 +306,7 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
async fn change_password<B: OpaqueHandler>(
&self,
backend_handler: &B,
user: &UserId,
user: UserId,
password: &[u8],
) -> Result<()> {
use lldap_auth::*;
@@ -306,7 +314,7 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
let registration_start_request =
opaque::client::registration::start_registration(password, &mut rng)?;
let req = registration::ClientRegistrationStartRequest {
username: user.to_string(),
username: user.clone(),
registration_start_request: registration_start_request.message,
};
let registration_start_response = backend_handler.registration_start(req).await?;
@@ -353,7 +361,7 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
),
})?
.iter()
.any(|g| g.display_name == "lldap_admin");
.any(|g| g.display_name == "lldap_admin".into());
if !credentials.can_change_password(&uid, user_is_admin) {
Err(LdapError {
code: LdapResultCode::InsufficentAccessRights,
@@ -363,7 +371,7 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
),
})
} else if let Err(e) = self
.change_password(self.get_opaque_handler(), &uid, password.as_bytes())
.change_password(self.get_opaque_handler(), uid, password.as_bytes())
.await
{
Err(LdapError {
@@ -405,7 +413,7 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
async fn handle_modify_change(
&mut self,
user_id: &UserId,
user_id: UserId,
credentials: &ValidationResults,
user_is_admin: bool,
change: &LdapModify,
@@ -421,7 +429,7 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
),
});
}
if !credentials.can_change_password(user_id, user_is_admin) {
if !credentials.can_change_password(&user_id, user_is_admin) {
return Err(LdapError {
code: LdapResultCode::InsufficentAccessRights,
message: format!(
@@ -478,9 +486,9 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
message: format!("Internal error while requesting user's groups: {:#?}", e),
})?
.iter()
.any(|g| g.display_name == "lldap_admin");
.any(|g| g.display_name == "lldap_admin".into());
for change in &request.changes {
self.handle_modify_change(&uid, &credentials, user_is_admin, change)
self.handle_modify_change(uid.clone(), &credentials, user_is_admin, change)
.await?
}
Ok(vec![make_modify_response(
@@ -523,6 +531,7 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
&self,
backend_handler: &impl UserAndGroupListerBackendHandler,
request: &LdapSearchRequest,
schema: &PublicSchema,
) -> LdapResult<InternalSearchResults> {
let dn_parts = parse_distinguished_name(&request.base.to_ascii_lowercase())?;
let scope = get_search_scope(&self.ldap_info.base_dn, &dn_parts, &request.scope);
@@ -546,11 +555,19 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
need_groups,
&request.base,
backend_handler,
schema,
)
.await
});
let get_group_list = cast(|filter: &LdapFilter| async {
-get_groups_list(&self.ldap_info, filter, &request.base, backend_handler).await
+get_groups_list(
+&self.ldap_info,
+filter,
+&request.base,
+backend_handler,
+schema,
+)
+.await
});
Ok(match scope {
SearchScope::Global => InternalSearchResults::UsersAndGroups(
@@ -609,12 +626,15 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
let backend_handler = self
.backend_handler
.get_user_restricted_lister_handler(user_info);
-let search_results = self.do_search_internal(&backend_handler, request).await?;
-let schema = backend_handler.get_schema().await.map_err(|e| LdapError {
-code: LdapResultCode::OperationsError,
-message: format!("Unable to get schema: {:#}", e),
-})?;
+let schema =
+PublicSchema::from(backend_handler.get_schema().await.map_err(|e| LdapError {
+code: LdapResultCode::OperationsError,
+message: format!("Unable to get schema: {:#}", e),
+})?);
+let search_results = self
+.do_search_internal(&backend_handler, request, &schema)
+.await?;
let mut results = match search_results {
InternalSearchResults::UsersAndGroups(users, groups) => {
convert_users_to_ldap_op(users, &request.attrs, &self.ldap_info, &schema)
@@ -623,6 +643,7 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
&request.attrs,
&self.ldap_info,
&backend_handler.user_filter,
+&schema,
))
.collect()
}
@@ -692,10 +713,12 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
backend_handler
.create_user(CreateUserRequest {
user_id,
-email: get_attribute("mail")
-.or_else(|| get_attribute("email"))
-.transpose()?
-.unwrap_or_default(),
+email: Email::from(
+get_attribute("mail")
+.or_else(|| get_attribute("email"))
+.transpose()?
+.unwrap_or_default(),
+),
display_name: get_attribute("cn").transpose()?,
first_name: get_attribute("givenname").transpose()?,
last_name: get_attribute("sn").transpose()?,
@@ -708,6 +731,7 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
code: LdapResultCode::ConstraintViolation,
message: format!("Invalid JPEG photo: {:#?}", e),
})?,
+..Default::default()
})
.await
.map_err(|e| LdapError {
@@ -853,7 +877,7 @@ mod tests {
let mut set = HashSet::new();
set.insert(GroupDetails {
group_id: GroupId(42),
-display_name: group,
+display_name: group.into(),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
uuid: uuid!("a1a2a3a4b1b2c1c2d1d2d3d4d5d6d7d8"),
attributes: Vec::new(),
@@ -940,7 +964,7 @@ mod tests {
let mut set = HashSet::new();
set.insert(GroupDetails {
group_id: GroupId(42),
-display_name: "lldap_admin".to_string(),
+display_name: "lldap_admin".into(),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
uuid: uuid!("a1a2a3a4b1b2c1c2d1d2d3d4d5d6d7d8"),
attributes: Vec::new(),
@@ -1027,7 +1051,7 @@ mod tests {
},
groups: Some(vec![GroupDetails {
group_id: GroupId(42),
-display_name: "rockstars".to_string(),
+display_name: "rockstars".into(),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
uuid: uuid!("a1a2a3a4b1b2c1c2d1d2d3d4d5d6d7d8"),
attributes: Vec::new(),
@@ -1175,16 +1199,16 @@ mod tests {
UserAndGroups {
user: User {
user_id: UserId::new("bob_1"),
-email: "bob@bobmail.bob".to_string(),
+email: "bob@bobmail.bob".into(),
display_name: Some("Bôb Böbberson".to_string()),
uuid: uuid!("698e1d5f-7a40-3151-8745-b9b8a37839da"),
attributes: vec![
AttributeValue {
-name: "first_name".to_owned(),
+name: "first_name".into(),
value: Serialized::from("Bôb"),
},
AttributeValue {
-name: "last_name".to_owned(),
+name: "last_name".into(),
value: Serialized::from("Böbberson"),
},
],
@@ -1195,19 +1219,19 @@ mod tests {
UserAndGroups {
user: User {
user_id: UserId::new("jim"),
-email: "jim@cricket.jim".to_string(),
+email: "jim@cricket.jim".into(),
display_name: Some("Jimminy Cricket".to_string()),
attributes: vec![
AttributeValue {
-name: "avatar".to_owned(),
+name: "avatar".into(),
value: Serialized::from(&JpegPhoto::for_tests()),
},
AttributeValue {
-name: "first_name".to_owned(),
+name: "first_name".into(),
value: Serialized::from("Jim"),
},
AttributeValue {
-name: "last_name".to_owned(),
+name: "last_name".into(),
value: Serialized::from("Cricket"),
},
],
@@ -1343,7 +1367,7 @@ mod tests {
Ok(vec![
Group {
id: GroupId(1),
-display_name: "group_1".to_string(),
+display_name: "group_1".into(),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
users: vec![UserId::new("bob"), UserId::new("john")],
uuid: uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc"),
@@ -1351,7 +1375,7 @@ mod tests {
},
Group {
id: GroupId(3),
-display_name: "BestGroup".to_string(),
+display_name: "BestGroup".into(),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
users: vec![UserId::new("john")],
uuid: uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc"),
@@ -1362,7 +1386,14 @@ mod tests {
let mut ldap_handler = setup_bound_admin_handler(mock).await;
let request = make_group_search_request(
LdapFilter::And(vec![]),
-vec!["objectClass", "dn", "cn", "uniqueMember", "entryUuid"],
+vec![
+"objectClass",
+"dn",
+"cn",
+"uniqueMember",
+"entryUuid",
+"entryDN",
+],
);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
@@ -1389,6 +1420,10 @@ mod tests {
atype: "entryUuid".to_string(),
vals: vec![b"04ac75e0-2900-3e21-926c-2f732c26b3fc".to_vec()],
},
+LdapPartialAttribute {
+atype: "entryDN".to_string(),
+vals: vec![b"uid=group_1,ou=groups,dc=example,dc=com".to_vec()],
+},
],
}),
LdapOp::SearchResultEntry(LdapSearchResultEntry {
@@ -1410,6 +1445,10 @@ mod tests {
atype: "entryUuid".to_string(),
vals: vec![b"04ac75e0-2900-3e21-926c-2f732c26b3fc".to_vec()],
},
+LdapPartialAttribute {
+atype: "entryDN".to_string(),
+vals: vec![b"uid=BestGroup,ou=groups,dc=example,dc=com".to_vec()],
+},
],
}),
make_search_success(),
@@ -1422,9 +1461,9 @@ mod tests {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_groups()
.with(eq(Some(GroupRequestFilter::And(vec![
-GroupRequestFilter::DisplayName("group_1".to_string()),
+GroupRequestFilter::DisplayName("group_1".into()),
GroupRequestFilter::Member(UserId::new("bob")),
-GroupRequestFilter::DisplayName("rockstars".to_string()),
+GroupRequestFilter::DisplayName("rockstars".into()),
false.into(),
GroupRequestFilter::Uuid(uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc")),
true.into(),
@@ -1442,7 +1481,7 @@ mod tests {
.times(1)
.return_once(|_| {
Ok(vec![Group {
-display_name: "group_1".to_string(),
+display_name: "group_1".into(),
id: GroupId(1),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
users: vec![],
@@ -1507,13 +1546,13 @@ mod tests {
mock.expect_list_groups()
.with(eq(Some(GroupRequestFilter::Or(vec![
GroupRequestFilter::Not(Box::new(GroupRequestFilter::DisplayName(
-"group_2".to_string(),
+"group_2".into(),
))),
]))))
.times(1)
.return_once(|_| {
Ok(vec![Group {
-display_name: "group_1".to_string(),
+display_name: "group_1".into(),
id: GroupId(1),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
users: vec![],
@@ -1550,7 +1589,7 @@ mod tests {
mock.expect_list_groups()
.with(eq(Some(GroupRequestFilter::And(vec![
true.into(),
-GroupRequestFilter::DisplayName("rockstars".to_string()),
+GroupRequestFilter::DisplayName("rockstars".into()),
]))))
.times(1)
.return_once(|_| Ok(vec![]));
@@ -1574,7 +1613,7 @@ mod tests {
#[tokio::test]
async fn test_search_groups_unsupported_substring() {
-let mut ldap_handler = setup_bound_admin_handler(MockTestBackendHandler::new()).await;
+let mut ldap_handler = setup_bound_readonly_handler(MockTestBackendHandler::new()).await;
let request = make_group_search_request(
LdapFilter::Substring("member".to_owned(), LdapSubstringFilter::default()),
vec!["cn"],
@@ -1588,13 +1627,31 @@ mod tests {
);
}
#[tokio::test]
+async fn test_search_groups_missing_attribute_substring() {
+let request = make_group_search_request(
+LdapFilter::Substring("nonexistent".to_owned(), LdapSubstringFilter::default()),
+vec!["cn"],
+);
+let mut mock = MockTestBackendHandler::new();
+mock.expect_list_groups()
+.with(eq(Some(false.into())))
+.times(1)
+.return_once(|_| Ok(vec![]));
+let mut ldap_handler = setup_bound_readonly_handler(mock).await;
+assert_eq!(
+ldap_handler.do_search_or_dse(&request).await,
+Ok(vec![make_search_success()]),
+);
+}
#[tokio::test]
async fn test_search_groups_error() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_groups()
.with(eq(Some(GroupRequestFilter::Or(vec![
GroupRequestFilter::Not(Box::new(GroupRequestFilter::DisplayName(
-"group_2".to_string(),
+"group_2".into(),
))),
]))))
.times(1)
@@ -1657,8 +1714,8 @@ mod tests {
true.into(),
false.into(),
UserRequestFilter::AttributeEquality(
-"first_name".to_owned(),
-"firstname".to_owned(),
+AttributeName::from("first_name"),
+Serialized::from("firstname"),
),
false.into(),
UserRequestFilter::UserIdSubString(SubStringFilter {
@@ -1761,7 +1818,7 @@ mod tests {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_users()
.with(
-eq(Some(UserRequestFilter::MemberOf("group_1".to_string()))),
+eq(Some(UserRequestFilter::MemberOf("group_1".into()))),
eq(false),
)
.times(1)
@@ -1861,15 +1918,15 @@ mod tests {
Ok(vec![UserAndGroups {
user: User {
user_id: UserId::new("bob_1"),
-email: "bob@bobmail.bob".to_string(),
+email: "bob@bobmail.bob".into(),
display_name: Some("Bôb Böbberson".to_string()),
attributes: vec![
AttributeValue {
-name: "first_name".to_owned(),
+name: "first_name".into(),
value: Serialized::from("Bôb"),
},
AttributeValue {
-name: "last_name".to_owned(),
+name: "last_name".into(),
value: Serialized::from("Böbberson"),
},
],
@@ -1884,7 +1941,7 @@ mod tests {
.return_once(|_| {
Ok(vec![Group {
id: GroupId(1),
-display_name: "group_1".to_string(),
+display_name: "group_1".into(),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
users: vec![UserId::new("bob"), UserId::new("john")],
uuid: uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc"),
@@ -1944,15 +2001,15 @@ mod tests {
Ok(vec![UserAndGroups {
user: User {
user_id: UserId::new("bob_1"),
-email: "bob@bobmail.bob".to_string(),
+email: "bob@bobmail.bob".into(),
display_name: Some("Bôb Böbberson".to_string()),
attributes: vec![
AttributeValue {
-name: "avatar".to_owned(),
+name: "avatar".into(),
value: Serialized::from(&JpegPhoto::for_tests()),
},
AttributeValue {
-name: "last_name".to_owned(),
+name: "last_name".into(),
value: Serialized::from("Böbberson"),
},
],
@@ -1967,7 +2024,7 @@ mod tests {
.returning(|_| {
Ok(vec![Group {
id: GroupId(1),
-display_name: "group_1".to_string(),
+display_name: "group_1".into(),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
users: vec![UserId::new("bob"), UserId::new("john")],
uuid: uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc"),
@@ -2160,7 +2217,7 @@ mod tests {
opaque::client::registration::start_registration("password".as_bytes(), &mut rng)
.unwrap();
let request = registration::ClientRegistrationStartRequest {
-username: "bob".to_string(),
+username: "bob".into(),
registration_start_request: registration_start_request.message,
};
let start_response = opaque::server::registration::start_registration(
@@ -2208,7 +2265,7 @@ mod tests {
opaque::client::registration::start_registration("password".as_bytes(), &mut rng)
.unwrap();
let request = registration::ClientRegistrationStartRequest {
-username: "bob".to_string(),
+username: "bob".into(),
registration_start_request: registration_start_request.message,
};
let start_response = opaque::server::registration::start_registration(
@@ -2258,7 +2315,7 @@ mod tests {
opaque::client::registration::start_registration("password".as_bytes(), &mut rng)
.unwrap();
let request = registration::ClientRegistrationStartRequest {
-username: "bob".to_string(),
+username: "bob".into(),
registration_start_request: registration_start_request.message,
};
let start_response = opaque::server::registration::start_registration(
@@ -2350,7 +2407,7 @@ mod tests {
let mut groups = HashSet::new();
groups.insert(GroupDetails {
group_id: GroupId(0),
-display_name: "lldap_admin".to_string(),
+display_name: "lldap_admin".into(),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
uuid: uuid!("a1a2a3a4b1b2c1c2d1d2d3d4d5d6d7d8"),
attributes: Vec::new(),
@@ -2430,7 +2487,7 @@ mod tests {
mock.expect_create_user()
.with(eq(CreateUserRequest {
user_id: UserId::new("bob"),
-email: "".to_owned(),
+email: "".into(),
display_name: Some("Bob".to_string()),
..Default::default()
}))
@@ -2472,7 +2529,7 @@ mod tests {
mock.expect_create_user()
.with(eq(CreateUserRequest {
user_id: UserId::new("bob"),
-email: "".to_owned(),
+email: "".into(),
display_name: Some("Bob".to_string()),
..Default::default()
}))
@@ -2529,7 +2586,7 @@ mod tests {
Ok(vec![UserAndGroups {
user: User {
user_id: UserId::new("bob"),
-email: "bob@bobmail.bob".to_string(),
+email: "bob@bobmail.bob".into(),
..Default::default()
},
groups: None,
@@ -2574,10 +2631,10 @@ mod tests {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_users().returning(|_, _| Ok(vec![]));
mock.expect_list_groups().returning(|f| {
-assert_eq!(f, Some(GroupRequestFilter::DisplayName("group".to_owned())));
+assert_eq!(f, Some(GroupRequestFilter::DisplayName("group".into())));
Ok(vec![Group {
id: GroupId(1),
-display_name: "group".to_string(),
+display_name: "group".into(),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
users: vec![UserId::new("bob")],
uuid: uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc"),
@@ -2638,7 +2695,7 @@ mod tests {
Ok(vec![UserAndGroups {
user: User {
user_id: UserId::new("bob"),
-email: "bob@bobmail.bob".to_string(),
+email: "bob@bobmail.bob".into(),
..Default::default()
},
groups: None,
@@ -2668,10 +2725,10 @@ mod tests {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_users().returning(|_, _| Ok(vec![]));
mock.expect_list_groups().returning(|f| {
-assert_eq!(f, Some(GroupRequestFilter::DisplayName("group".to_owned())));
+assert_eq!(f, Some(GroupRequestFilter::DisplayName("group".into())));
Ok(vec![Group {
id: GroupId(1),
-display_name: "group".to_string(),
+display_name: "group".into(),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
users: vec![UserId::new("bob")],
uuid: uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc"),
@@ -2723,4 +2780,98 @@ mod tests {
])
);
}
#[tokio::test]
async fn test_custom_attribute_read() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_users().times(1).return_once(|_, _| {
Ok(vec![UserAndGroups {
user: User {
user_id: UserId::new("test"),
attributes: vec![AttributeValue {
name: "nickname".into(),
value: Serialized::from("Bob the Builder"),
}],
..Default::default()
},
groups: None,
}])
});
mock.expect_list_groups().times(1).return_once(|_| {
Ok(vec![Group {
id: GroupId(1),
display_name: "group".into(),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
users: vec![UserId::new("bob")],
uuid: uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc"),
attributes: vec![AttributeValue {
name: "club_name".into(),
value: Serialized::from("Breakfast Club"),
}],
}])
});
mock.expect_get_schema().returning(|| {
Ok(crate::domain::handler::Schema {
user_attributes: AttributeList {
attributes: vec![AttributeSchema {
name: "nickname".into(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
is_editable: true,
is_hardcoded: false,
}],
},
group_attributes: AttributeList {
attributes: vec![AttributeSchema {
name: "club_name".into(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
is_editable: true,
is_hardcoded: false,
}],
},
})
});
let mut ldap_handler = setup_bound_readonly_handler(mock).await;
let request = make_search_request(
"dc=example,dc=com",
LdapFilter::And(vec![]),
vec!["uid", "nickname", "club_name"],
);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![
LdapOp::SearchResultEntry(LdapSearchResultEntry {
dn: "uid=test,ou=people,dc=example,dc=com".to_string(),
attributes: vec![
LdapPartialAttribute {
atype: "uid".to_owned(),
vals: vec![b"test".to_vec()],
},
LdapPartialAttribute {
atype: "nickname".to_owned(),
vals: vec![b"Bob the Builder".to_vec()],
},
],
}),
LdapOp::SearchResultEntry(LdapSearchResultEntry {
dn: "cn=group,ou=groups,dc=example,dc=com".to_owned(),
attributes: vec![
LdapPartialAttribute {
atype: "uid".to_owned(),
vals: vec![b"group".to_vec()],
},
LdapPartialAttribute {
atype: "club_name".to_owned(),
vals: vec![b"Breakfast Club".to_vec()],
},
],
}),
make_search_success()
]),
);
}
}


@@ -2,6 +2,7 @@ use crate::{
domain::{
handler::{BackendHandler, LoginHandler},
opaque_handler::OpaqueHandler,
+types::AttributeName,
},
infra::{
access_control::AccessControlledBackendHandler,
@@ -13,7 +14,7 @@ use actix_rt::net::TcpStream;
use actix_server::ServerBuilder;
use actix_service::{fn_service, ServiceFactoryExt};
use anyhow::{anyhow, Context, Result};
-use ldap3_proto::{proto::LdapMsg, LdapCodec};
+use ldap3_proto::{control::LdapControl, proto::LdapMsg, proto::LdapOp, LdapCodec};
use rustls::PrivateKey;
use tokio_rustls::TlsAcceptor as RustlsTlsAcceptor;
use tokio_util::codec::{FramedRead, FramedWrite};
@@ -39,12 +40,21 @@ where
if result.is_empty() {
debug!("No response");
}
+let results: i64 = result.len().try_into().unwrap();
for response in result.into_iter() {
debug!(?response);
+let controls = if matches!(response, LdapOp::SearchResultDone(_)) {
+vec![LdapControl::SimplePagedResults {
+size: results - 1, // Avoid counting SearchResultDone as a result
+cookie: vec![],
+}]
+} else {
+vec![]
+};
resp.send(LdapMsg {
msgid: msg.msgid,
op: response,
-ctrl: vec![],
+ctrl: controls,
})
.await
.context("while sending a response: {:#}")?
@@ -63,8 +73,8 @@ async fn handle_ldap_stream<Stream, Backend>(
stream: Stream,
backend_handler: Backend,
ldap_base_dn: String,
-ignored_user_attributes: Vec<String>,
-ignored_group_attributes: Vec<String>,
+ignored_user_attributes: Vec<AttributeName>,
+ignored_group_attributes: Vec<AttributeName>,
) -> Result<Stream>
where
Backend: BackendHandler + LoginHandler + OpaqueHandler + 'static,


@@ -10,7 +10,6 @@ pub mod ldap_handler;
pub mod ldap_server;
pub mod logging;
pub mod mail;
-pub mod schema;
pub mod sql_backend_handler;
pub mod tcp_backend_handler;
pub mod tcp_server;


@@ -1,104 +0,0 @@
use crate::domain::{
handler::{AttributeSchema, Schema},
types::AttributeType,
};
use serde::{Deserialize, Serialize};
#[derive(PartialEq, Eq, Debug, Serialize, Deserialize)]
pub struct PublicSchema(Schema);
impl PublicSchema {
pub fn get_schema(&self) -> &Schema {
&self.0
}
}
impl From<Schema> for PublicSchema {
fn from(mut schema: Schema) -> Self {
schema.user_attributes.attributes.extend_from_slice(&[
AttributeSchema {
name: "user_id".to_owned(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
is_editable: false,
is_hardcoded: true,
},
AttributeSchema {
name: "creation_date".to_owned(),
attribute_type: AttributeType::DateTime,
is_list: false,
is_visible: true,
is_editable: false,
is_hardcoded: true,
},
AttributeSchema {
name: "mail".to_owned(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
is_editable: true,
is_hardcoded: true,
},
AttributeSchema {
name: "uuid".to_owned(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
is_editable: false,
is_hardcoded: true,
},
AttributeSchema {
name: "display_name".to_owned(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
is_editable: true,
is_hardcoded: true,
},
]);
schema
.user_attributes
.attributes
.sort_by(|a, b| a.name.cmp(&b.name));
schema.group_attributes.attributes.extend_from_slice(&[
AttributeSchema {
name: "group_id".to_owned(),
attribute_type: AttributeType::Integer,
is_list: false,
is_visible: true,
is_editable: false,
is_hardcoded: true,
},
AttributeSchema {
name: "creation_date".to_owned(),
attribute_type: AttributeType::DateTime,
is_list: false,
is_visible: true,
is_editable: false,
is_hardcoded: true,
},
AttributeSchema {
name: "uuid".to_owned(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
is_editable: false,
is_hardcoded: true,
},
AttributeSchema {
name: "display_name".to_owned(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
is_editable: true,
is_hardcoded: true,
},
]);
schema
.group_attributes
.attributes
.sort_by(|a, b| a.name.cmp(&b.name));
PublicSchema(schema)
}
}


@@ -6,6 +6,7 @@ use crate::domain::{
types::UserId,
};
use async_trait::async_trait;
+use chrono::NaiveDateTime;
use sea_orm::{
sea_query::{Cond, Expr},
ActiveModelTrait, ColumnTrait, EntityTrait, IntoActiveModel, QueryFilter, QuerySelect,
@@ -62,6 +63,25 @@ impl TcpBackendHandler for SqlBackendHandler {
Ok((refresh_token, duration))
}
+#[instrument(skip_all, level = "debug")]
+async fn register_jwt(
+&self,
+user: &UserId,
+jwt_hash: u64,
+expiry_date: NaiveDateTime,
+) -> Result<()> {
+debug!(?user, ?jwt_hash);
+let new_token = model::jwt_storage::Model {
+jwt_hash: jwt_hash as i64,
+user_id: user.clone(),
+blacklisted: false,
+expiry_date,
+}
+.into_active_model();
+new_token.insert(&self.sql_pool).await?;
+Ok(())
+}
#[instrument(skip_all, level = "debug")]
async fn check_token(&self, refresh_token_hash: u64, user: &UserId) -> Result<bool> {
debug!(?user);


@@ -1,4 +1,5 @@
use async_trait::async_trait;
+use chrono::NaiveDateTime;
use std::collections::HashSet;
use crate::domain::{error::Result, types::UserId};
@@ -7,6 +8,12 @@ use crate::domain::{error::Result, types::UserId};
pub trait TcpBackendHandler: Sync {
async fn get_jwt_blacklist(&self) -> anyhow::Result<HashSet<u64>>;
async fn create_refresh_token(&self, user: &UserId) -> Result<(String, chrono::Duration)>;
+async fn register_jwt(
+&self,
+user: &UserId,
+jwt_hash: u64,
+expiry_date: NaiveDateTime,
+) -> Result<()>;
async fn check_token(&self, refresh_token_hash: u64, user: &UserId) -> Result<bool>;
async fn blacklist_jwts(&self, user: &UserId) -> Result<HashSet<u64>>;
async fn delete_refresh_token(&self, refresh_token_hash: u64) -> Result<()>;


@@ -12,7 +12,7 @@ use crate::{
tcp_backend_handler::*,
},
};
-use actix_files::{Files, NamedFile};
+use actix_files::Files;
use actix_http::{header, HttpServiceBuilder};
use actix_server::ServerBuilder;
use actix_service::map_config;
@@ -21,13 +21,22 @@ use anyhow::{Context, Result};
use hmac::Hmac;
use sha2::Sha512;
use std::collections::HashSet;
-use std::path::PathBuf;
use std::sync::RwLock;
use tracing::info;
-async fn index() -> actix_web::Result<NamedFile> {
-let path = PathBuf::from(r"app/index.html");
-Ok(NamedFile::open(path)?)
+async fn index<Backend>(data: web::Data<AppState<Backend>>) -> actix_web::Result<impl Responder> {
+let mut file = std::fs::read_to_string(r"./app/index.html")?;
+if data.server_url.path() != "/" {
+file = file.replace(
+"<base href=\"/\">",
+format!("<base href=\"{}/\">", data.server_url.path()).as_str(),
+);
+}
+Ok(file
+.customize()
+.insert_header((header::CONTENT_TYPE, "text/html; charset=utf-8")))
}
#[derive(thiserror::Error, Debug)]
@@ -68,6 +77,20 @@ pub(crate) fn error_to_http_response(error: TcpError) -> HttpResponse {
.body(error.to_string())
}
+async fn main_js_handler<Backend>(
+data: web::Data<AppState<Backend>>,
+) -> actix_web::Result<impl Responder> {
+let mut file = std::fs::read_to_string(r"./app/static/main.js")?;
+if data.server_url.path() != "/" {
+file = file.replace("/pkg/", format!("{}/pkg/", data.server_url.path()).as_str());
+}
+Ok(file
+.customize()
+.insert_header((header::CONTENT_TYPE, "text/javascript")))
+}
async fn wasm_handler() -> actix_web::Result<impl Responder> {
Ok(actix_files::NamedFile::open_async("./app/pkg/lldap_app_bg.wasm").await?)
}
@@ -118,6 +141,7 @@ fn http_config<Backend>(
web::resource("/pkg/lldap_app_bg.wasm.gz").route(web::route().to(wasm_handler_compressed)),
)
.service(web::resource("/pkg/lldap_app_bg.wasm").route(web::route().to(wasm_handler)))
+.service(web::resource("/static/main.js").route(web::route().to(main_js_handler::<Backend>)))
// Serve the /pkg path with the compiled WASM app.
.service(Files::new("/pkg", "./app/pkg"))
// Serve static files
@@ -125,7 +149,7 @@ fn http_config<Backend>(
// Serve static fonts
.service(Files::new("/static/fonts", "./app/static/fonts"))
// Default to serve index.html for unknown routes, to support routing.
-.default_service(web::route().guard(guard::Get()).to(index));
+.default_service(web::route().guard(guard::Get()).to(index::<Backend>));
}
pub(crate) struct AppState<Backend> {


@@ -20,7 +20,7 @@ mockall::mock! {
impl GroupBackendHandler for TestBackendHandler {
async fn get_group_details(&self, group_id: GroupId) -> Result<GroupDetails>;
async fn update_group(&self, request: UpdateGroupRequest) -> Result<()>;
-async fn create_group(&self, group_name: &str) -> Result<GroupId>;
+async fn create_group(&self, request: CreateGroupRequest) -> Result<GroupId>;
async fn delete_group(&self, group_id: GroupId) -> Result<()>;
}
#[async_trait]
@@ -38,10 +38,17 @@ mockall::mock! {
async fn remove_user_from_group(&self, user_id: &UserId, group_id: GroupId) -> Result<()>;
}
#[async_trait]
-impl SchemaBackendHandler for TestBackendHandler {
+impl ReadSchemaBackendHandler for TestBackendHandler {
async fn get_schema(&self) -> Result<Schema>;
}
+#[async_trait]
+impl SchemaBackendHandler for TestBackendHandler {
+async fn add_user_attribute(&self, request: CreateAttributeRequest) -> Result<()>;
+async fn add_group_attribute(&self, request: CreateAttributeRequest) -> Result<()>;
+async fn delete_user_attribute(&self, name: &AttributeName) -> Result<()>;
+async fn delete_group_attribute(&self, name: &AttributeName) -> Result<()>;
+}
#[async_trait]
impl BackendHandler for TestBackendHandler {}
#[async_trait]
impl OpaqueHandler for TestBackendHandler {
@@ -67,7 +74,7 @@ pub fn setup_default_schema(mock: &mut MockTestBackendHandler) {
user_attributes: AttributeList {
attributes: vec![
AttributeSchema {
-name: "avatar".to_owned(),
+name: "avatar".into(),
attribute_type: AttributeType::JpegPhoto,
is_list: false,
is_visible: true,
@@ -75,7 +82,7 @@ pub fn setup_default_schema(mock: &mut MockTestBackendHandler) {
is_hardcoded: true,
},
AttributeSchema {
-name: "first_name".to_owned(),
+name: "first_name".into(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
@@ -83,7 +90,7 @@ pub fn setup_default_schema(mock: &mut MockTestBackendHandler) {
is_hardcoded: true,
},
AttributeSchema {
-name: "last_name".to_owned(),
+name: "last_name".into(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,


@@ -8,17 +8,23 @@ use std::time::Duration;
use crate::{
domain::{
handler::{
-CreateUserRequest, GroupBackendHandler, GroupListerBackendHandler, GroupRequestFilter,
-UserBackendHandler, UserListerBackendHandler, UserRequestFilter,
+CreateGroupRequest, CreateUserRequest, GroupBackendHandler, GroupListerBackendHandler,
+GroupRequestFilter, UserBackendHandler, UserListerBackendHandler, UserRequestFilter,
},
sql_backend_handler::SqlBackendHandler,
sql_opaque_handler::register_password,
+sql_tables::{get_private_key_info, set_private_key_info},
},
+infra::{
+cli::*,
+configuration::{compare_private_key_hashes, Configuration},
+db_cleaner::Scheduler,
+healthcheck, mail,
+},
-infra::{cli::*, configuration::Configuration, db_cleaner::Scheduler, healthcheck, mail},
};
use actix::Actor;
use actix_server::ServerBuilder;
-use anyhow::{anyhow, Context, Result};
+use anyhow::{anyhow, bail, Context, Result};
use futures_util::TryFutureExt;
use sea_orm::Database;
use tracing::*;
@@ -36,17 +42,17 @@ async fn create_admin_user(handler: &SqlBackendHandler, config: &Configuration)
handler
.create_user(CreateUserRequest {
user_id: config.ldap_user_dn.clone(),
-email: config.ldap_user_email.clone(),
+email: config.ldap_user_email.clone().into(),
display_name: Some("Administrator".to_string()),
..Default::default()
})
-.and_then(|_| register_password(handler, &config.ldap_user_dn, &config.ldap_user_pass))
+.and_then(|_| {
+register_password(handler, config.ldap_user_dn.clone(), &config.ldap_user_pass)
+})
.await
.context("Error creating admin user")?;
let groups = handler
-.list_groups(Some(GroupRequestFilter::DisplayName(
-"lldap_admin".to_owned(),
-)))
+.list_groups(Some(GroupRequestFilter::DisplayName("lldap_admin".into())))
.await?;
assert_eq!(groups.len(), 1);
handler
@@ -57,13 +63,16 @@ async fn create_admin_user(handler: &SqlBackendHandler, config: &Configuration)
async fn ensure_group_exists(handler: &SqlBackendHandler, group_name: &str) -> Result<()> {
if handler
-.list_groups(Some(GroupRequestFilter::DisplayName(group_name.to_owned())))
+.list_groups(Some(GroupRequestFilter::DisplayName(group_name.into())))
.await?
.is_empty()
{
warn!("Could not find {} group, trying to create it", group_name);
handler
-.create_group(group_name)
+.create_group(CreateGroupRequest {
+display_name: group_name.into(),
+..Default::default()
+})
.await
.context(format!("while creating {} group", group_name))?;
}
@@ -85,13 +94,33 @@ async fn set_up_server(config: Configuration) -> Result<ServerBuilder> {
domain::sql_tables::init_table(&sql_pool)
.await
.context("while creating the tables")?;
+let private_key_info = config.get_private_key_info();
+let force_update_private_key = config.force_update_private_key;
+match (
+compare_private_key_hashes(
+get_private_key_info(&sql_pool).await?.as_ref(),
+&private_key_info,
+),
+force_update_private_key,
+) {
+(Ok(false), true) => {
+bail!("The private key has not changed, but force_update_private_key/LLDAP_FORCE_UPDATE_PRIVATE_KEY is set to true. Please set force_update_private_key to false and restart the server.");
+}
+(Ok(true), _) | (Err(_), true) => {
+set_private_key_info(&sql_pool, private_key_info).await?;
+}
+(Ok(false), false) => {}
+(Err(e), false) => {
+return Err(anyhow!("The private key encoding the passwords has changed since last successful startup. Changing the private key will invalidate all existing passwords. If you want to proceed, restart the server with the CLI arg --force-update-private-key=true or the env variable LLDAP_FORCE_UPDATE_PRIVATE_KEY=true. You probably also want --force-ldap-user-pass-reset / LLDAP_FORCE_LDAP_USER_PASS_RESET=true to reset the admin password to the value in the configuration.").context(e));
+}
+}
let backend_handler = SqlBackendHandler::new(config.clone(), sql_pool.clone());
ensure_group_exists(&backend_handler, "lldap_admin").await?;
ensure_group_exists(&backend_handler, "lldap_password_manager").await?;
ensure_group_exists(&backend_handler, "lldap_strict_readonly").await?;
let admin_present = if let Ok(admins) = backend_handler
.list_users(
-Some(UserRequestFilter::MemberOf("lldap_admin".to_owned())),
+Some(UserRequestFilter::MemberOf("lldap_admin".into())),
false,
)
.await
@@ -106,6 +135,21 @@ async fn set_up_server(config: Configuration) -> Result<ServerBuilder> {
.await
.map_err(|e| anyhow!("Error setting up admin login/account: {:#}", e))
.context("while creating the admin user")?;
+} else if config.force_ldap_user_pass_reset {
+warn!("Forcing admin password reset to the config-provided password");
+register_password(
+&backend_handler,
+config.ldap_user_dn.clone(),
+&config.ldap_user_pass,
+)
+.await
+.context(format!(
+"while resetting admin password for {}",
+&config.ldap_user_dn
+))?;
}
+if config.force_update_private_key || config.force_ldap_user_pass_reset {
+bail!("Restart the server without --force-update-private-key or --force-ldap-user-pass-reset to continue.");
+}
let server_builder = infra::ldap_server::build_ldap_server(
&config,
@@ -140,9 +184,13 @@ fn run_server_command(opts: RunOpts) -> Result<()> {
let config = infra::configuration::init(opts)?;
infra::logging::init(&config)?;
-actix::run(
-run_server(config).unwrap_or_else(|e| error!("Could not bring up the servers: {:#}", e)),
-)?;
+use std::sync::{Arc, Mutex};
+let result = Arc::new(Mutex::new(Ok(())));
+let result_async = Arc::clone(&result);
+actix::run(run_server(config).unwrap_or_else(move |e| *result_async.lock().unwrap() = Err(e)))?;
+if let Err(e) = result.lock().unwrap().as_ref() {
+anyhow::bail!(format!("Could not set up servers: {:#}", e));
+}
info!("End.");
Ok(())


@@ -10,7 +10,7 @@ pub fn get_token(client: &Client) -> String {
.header(reqwest::header::CONTENT_TYPE, "application/json")
.body(
serde_json::to_string(&lldap_auth::login::ClientSimpleLoginRequest {
-                    username,
+                    username: username.into(),
password,
})
.expect("Failed to encode the username/password as json to log in"),
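The repeated `username: username.into()` conversions in this diff reflect `UserId` becoming a dedicated type (moved down to `lldap_auth`, per the commit log). A minimal sketch of such a newtype — the case-insensitive normalization here is an assumption for illustration, not lldap's actual definition:

```rust
// Illustrative stand-in for the UserId newtype that `.into()` converts
// plain strings into.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct UserId(String);

impl From<&str> for UserId {
    fn from(s: &str) -> Self {
        // Assumption for this sketch: LDAP user ids compare
        // case-insensitively, so normalize on construction.
        UserId(s.to_lowercase())
    }
}

impl From<String> for UserId {
    fn from(s: String) -> Self {
        UserId::from(s.as_str())
    }
}
```

Funneling every construction through `From` gives one point of normalization, which is why the call sites only need `.into()`.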


@@ -108,6 +108,7 @@ impl LLDAPFixture {
display_name: None,
first_name: None,
last_name: None,
+            attributes: None,
},
},
)


@@ -47,7 +47,7 @@ fn gitea() {
let mut found_users: HashSet<String> = HashSet::new();
for result in results {
let attrs = SearchEntry::construct(result).attrs;
-            let user = attrs.get("uid").unwrap().get(0).unwrap();
+            let user = attrs.get("uid").unwrap().first().unwrap();
found_users.insert(user.clone());
}
assert!(found_users.contains(&gitea_user1));
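The `get(0)` → `first()` swaps in these test files are the clippy-preferred spelling for reading a slice's first element; behavior is identical. A tiny illustration (`first_attr` is a made-up helper, not from the codebase):

```rust
// slice::first() returns Option<&T>, exactly like get(0), but states the
// intent without a literal index.
fn first_attr(values: &[String]) -> Option<&String> {
    values.first()
}
```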


@@ -102,7 +102,7 @@ fn get_users_and_groups(results: SearchResult) -> HashMap<String, HashSet<String
let mut found_users: HashMap<String, HashSet<String>> = HashMap::new();
for result in results {
let attrs = SearchEntry::construct(result).attrs;
-            let user = attrs.get("uid").unwrap().get(0).unwrap();
+            let user = attrs.get("uid").unwrap().first().unwrap();
let user_groups = attrs.get("memberof").unwrap().clone();
let mut groups: HashSet<String> = HashSet::new();
groups.extend(user_groups.clone());


@@ -30,6 +30,10 @@ pub struct CliOpts {
/// New password for the user.
#[clap(short, long)]
pub password: String,
+    /// Bypass password requirements such as minimum length. Unsafe.
+    #[clap(long)]
+    pub bypass_password_policy: bool,
}
fn append_to_url(base_url: &Url, path: &str) -> Url {
@@ -45,7 +49,7 @@ fn get_token(base_url: &Url, username: &str, password: &str) -> Result<String> {
.header(reqwest::header::CONTENT_TYPE, "application/json")
.body(
serde_json::to_string(&lldap_auth::login::ClientSimpleLoginRequest {
-                    username: username.to_string(),
+                    username: username.into(),
password: password.to_string(),
})
.expect("Failed to encode the username/password as json to log in"),
@@ -97,7 +101,7 @@ pub fn register_finish(
fn main() -> Result<()> {
let opts = CliOpts::parse();
ensure!(
-        opts.password.len() >= 8,
+        opts.bypass_password_policy || opts.password.len() >= 8,
"New password is too short, expected at least 8 characters"
);
ensure!(
@@ -117,7 +121,7 @@ fn main() -> Result<()> {
opaque::client::registration::start_registration(opts.password.as_bytes(), &mut rng)
.context("Could not initiate password change")?;
let start_request = registration::ClientRegistrationStartRequest {
-        username: opts.username.to_string(),
+        username: opts.username.clone().into(),
registration_start_request: registration_start_request.message,
};
let res = register_start(&opts.base_url, &token, start_request)?;
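The relaxed `ensure!` above means the new `--bypass-password-policy` flag short-circuits the minimum-length check rather than removing it. As a standalone sketch (`password_acceptable` is illustrative, not the actual function):

```rust
// The bypass flag wins before the length policy is even evaluated.
fn password_acceptable(password: &str, bypass_password_policy: bool) -> bool {
    // Note: len() counts bytes, matching the original ensure! condition.
    bypass_password_policy || password.len() >= 8
}
```

Keeping the check in one boolean expression means the error message ("New password is too short...") is only ever reached when the policy actually applies.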