Compare commits

...

102 Commits
v0.1 ... master

Author SHA1 Message Date
Profitroll c3a9a2f40a Added missing import of SearchLimitInvalidError 2023-12-14 01:22:03 +02:00
Profitroll bcf74089f9 Fix of a fix in exceptions 2023-12-14 01:20:42 +02:00
Profitroll 25c902c194 Fixed the wrong exceptions being imported 2023-12-14 01:09:04 +02:00
Profitroll 5129cb449e Merge pull request 'Quotas, new secrets, upgrades' (#36) from dev into master
Reviewed-on: #36
2023-11-25 20:12:43 +02:00
Profitroll 4d6efac3c4 Merge branch 'dev' 2023-11-25 19:12:05 +01:00
Profitroll 88b820e90d Merge branch 'master' into dev 2023-11-25 20:00:52 +02:00
Profitroll afefea6f68 Fixed "TypeError" for UserInDB 2023-11-25 18:17:17 +01:00
Profitroll e5fad5ba92 Fixed "unrecognized arguments" error 2023-11-25 18:14:30 +01:00
Profitroll 5174602c31 Added upgrade section 2023-11-25 18:12:01 +01:00
Profitroll 0043abdbad Migration for quotas added 2023-11-25 18:05:12 +01:00
Profitroll 0f423166f1 New secrets system and quotas (#35) 2023-11-25 17:50:09 +01:00
Profitroll b2146b965a Fixed license link
Signed-off-by: Profitroll <profitroll@noreply.localhost>
2023-11-24 12:19:32 +02:00
Profitroll 3aa171869b Update dependency fastapi to v0.104.1 (#34) 2023-10-31 11:07:53 +02:00
Renovate 126c66637e Update dependency fastapi to v0.104.1 2023-10-30 12:18:40 +02:00
Profitroll d0d127d9c0 Merge pull request 'Update dependency fastapi to v0.104.0' (#33) from renovate/fastapi-0.x into dev
Reviewed-on: #33
2023-10-18 21:55:59 +03:00
Renovate 728917b4b9 Update dependency fastapi to v0.104.0 2023-10-18 16:08:54 +03:00
Profitroll b1eb8f9aac Merge pull request 'Update dependency fastapi to v0.103.2' (#32) from renovate/fastapi-0.x into dev
Reviewed-on: #32
2023-09-29 17:43:13 +03:00
Renovate 0a30512dbc Update dependency fastapi to v0.103.2 2023-09-28 23:38:38 +03:00
Profitroll 14b09d7062 Merge pull request 'Update dependency opencv-python to ~=4.8.1.78' (#31) from renovate/opencv-python-4.x into dev
Reviewed-on: #31
2023-09-28 23:28:25 +03:00
Renovate ac8f2b2ba6 Update dependency opencv-python to ~=4.8.1.78 2023-09-28 14:20:54 +03:00
Profitroll eab19e6783 Merge pull request 'Update dependency fastapi to v0.103.1' (#30) from renovate/fastapi-0.x into dev
Reviewed-on: #30
2023-09-04 22:40:37 +03:00
Renovate 8347a4c779 Update dependency fastapi to v0.103.1 2023-09-02 20:40:38 +03:00
Profitroll ec5d0585a2 Merge pull request 'Update dependency fastapi to v0.103.0' (#29) from renovate/fastapi-0.x into dev
Reviewed-on: #29
2023-08-26 23:03:06 +03:00
Renovate ee53a77691 Update dependency fastapi to v0.103.0 2023-08-26 22:09:12 +03:00
Profitroll 10ee56be9e Merge pull request 'Update dependency fastapi to v0.102.0' (#28) from renovate/fastapi-0.x into dev
Reviewed-on: #28
2023-08-25 23:22:13 +03:00
Renovate 91d5032fd2 Update dependency fastapi to v0.102.0 2023-08-25 22:30:40 +03:00
Profitroll 3569de9363 Added pymongo as a direct dependency 2023-08-14 14:26:54 +02:00
Profitroll c966a6de07 Fixed direction errors 2023-08-14 13:55:49 +02:00
Profitroll 7011baff0f Added db_client_sync 2023-08-14 13:51:18 +02:00
Profitroll a1acaed6dd WIP: Migration to async_pymongo 2023-08-14 13:44:07 +02:00
Profitroll 80ec8eb4f3 Merge pull request 'Update dependency fastapi to v0.101.1' (#26) from renovate/fastapi-0.x into dev
Reviewed-on: #26
2023-08-14 13:35:26 +03:00
Renovate bcc7012744 Update dependency fastapi to v0.101.1 2023-08-14 13:12:49 +03:00
Profitroll e3038e4224 Merge pull request 'Update dependency aiofiles to v23.2.1' (#25) from renovate/aiofiles-23.x into dev
Reviewed-on: #25
2023-08-09 23:20:30 +03:00
Renovate 3b4d108d45 Update dependency aiofiles to v23.2.1 2023-08-09 18:50:11 +03:00
Profitroll 16fe8235f4 Merge pull request 'Update dependency fastapi to v0.101.0' (#24) from renovate/fastapi-0.x into dev
Reviewed-on: #24
2023-08-05 13:38:45 +03:00
Renovate 6cc0d3814e Update dependency fastapi to v0.101.0 2023-08-05 01:06:50 +03:00
Profitroll b0c46e0c1e Merge pull request 'Update dependency fastapi to v0.100.1' (#23) from renovate/fastapi-0.x into dev
Reviewed-on: #23
2023-07-28 09:33:38 +03:00
Renovate 7c725bf04a Update dependency fastapi to v0.100.1 2023-07-27 23:20:36 +03:00
Profitroll cff6ed17a7 Merge pull request 'Update dependency pymongo to v4.4.1' (#22) from renovate/pymongo-4.x into dev
Reviewed-on: #22
2023-07-15 15:23:30 +03:00
Renovate e6fae57679 Update dependency pymongo to v4.4.1 2023-07-14 16:02:07 +03:00
Profitroll dfdfebe155 Merge pull request 'Update dependency fastapi to v0.100.0' (#21) from renovate/fastapi-0.x into dev
Reviewed-on: #21
2023-07-07 22:51:12 +03:00
Renovate 01b6222f6b Update dependency fastapi to v0.100.0 2023-07-07 21:00:24 +03:00
Profitroll 10fb021162 Merge pull request 'Update dependency fastapi to v0.99.1' (#20) from renovate/fastapi-0.x into dev
Reviewed-on: #20
2023-07-03 11:37:38 +03:00
Renovate 4545e26f32 Update dependency fastapi to v0.99.1 2023-07-02 19:30:19 +03:00
Profitroll ab2bfd10d5 Merge pull request 'Update dependency opencv-python to ~=4.8.0.74' (#19) from renovate/opencv-python-4.x into dev
Reviewed-on: #19
2023-06-30 19:06:25 +03:00
Renovate e9f3237fbb Update dependency opencv-python to ~=4.8.0.74 2023-06-30 15:53:36 +03:00
Profitroll 1bcca0f812 Merge pull request 'Random media requests' (#18) from dev into master
Reviewed-on: #18
2023-06-27 14:54:28 +03:00
Profitroll b3c9a972c8 Merge branch 'master' into dev 2023-06-27 14:54:21 +03:00
Profitroll 42f125716a Updated to v0.5 2023-06-27 13:52:13 +02:00
Profitroll 5e3df74052 Added random photo/video request 2023-06-27 13:51:18 +02:00
Profitroll 2ff4623d5f Merge pull request 'Update dependency scipy to ~=1.11.0' (#17) from renovate/scipy-1.x into dev
Reviewed-on: #17
2023-06-26 11:49:40 +03:00
Renovate 737b4c57c0 Update dependency scipy to ~=1.11.0 2023-06-25 21:47:06 +03:00
Profitroll d723bb6b80 Merge branch 'master' of https://git.profitroll.eu/profitroll/PhotosAPI 2023-06-23 12:17:05 +02:00
Profitroll 2a7870620c Refactor changed are done 2023-06-23 12:17:01 +02:00
Profitroll b003712358 Fixed path error 2023-06-23 12:09:36 +02:00
Profitroll d29dfa4d3e Merge pull request 'Update FastAPI to 0.98.0' (#16) from dev into master
Reviewed-on: #16
2023-06-23 12:32:46 +03:00
Profitroll d688d766da Merge branch 'master' into dev 2023-06-23 12:31:10 +03:00
Profitroll 5cc10367b2 Typo fixed 2023-06-23 11:30:18 +02:00
Profitroll 4b43e76822 logWrite replaced with logging module 2023-06-23 11:25:27 +02:00
Profitroll 23467a88ef pathlib support 2023-06-23 11:17:02 +02:00
Profitroll 88d8a38444 WIP: pathlib support 2023-06-23 08:51:42 +00:00
Profitroll a5cd6a215f Fixed missing await 2023-06-23 07:40:37 +00:00
Profitroll a6002a5e60 Merge pull request 'Update dependency fastapi to v0.98.0' (#14) from renovate/fastapi-0.x into dev
Reviewed-on: #14
2023-06-22 22:12:09 +03:00
Renovate 917048a333 Update dependency fastapi to v0.98.0 2023-06-22 21:40:59 +03:00
Profitroll 6be51c5aaa Merge pull request 'Fixed OpenAPI specs' (#13) from dev into master
Reviewed-on: #13
2023-06-22 15:44:39 +03:00
Profitroll 840e3022b3 Merge branch 'master' into dev 2023-06-22 15:44:35 +03:00
Profitroll 24f4773dd7 Updated to v0.4 2023-06-22 14:43:15 +02:00
Profitroll 00d3d62762 Fixed openapi spec 2023-06-22 14:43:00 +02:00
Profitroll 2a29b85ad2 Merge pull request 'Updated to v0.3' (#12) from dev into master
Reviewed-on: #12
2023-06-22 15:01:58 +03:00
Profitroll 9bdc788078 Updated to v0.3 2023-06-22 14:01:12 +02:00
Profitroll 5a5103ea9c Merge pull request 'Fixes and cleanups' (#11) from dev into master
Reviewed-on: #11
2023-06-22 14:52:22 +03:00
Profitroll ccf4c43bb9 Merge branch 'dev' of https://git.profitroll.eu/profitroll/PhotosAPI into dev 2023-06-22 13:51:13 +02:00
Profitroll 19e0531a24 Fixed mime types of photo/video get 2023-06-22 13:51:04 +02:00
Profitroll b46f3fb0fd Imports cleanup 2023-06-22 13:26:01 +02:00
Profitroll d2f3d7e687 Added aiofiles to requirements 2023-06-22 13:25:36 +02:00
Profitroll 83dd4b6746 Sorted imports with isort 2023-06-22 13:17:53 +02:00
Profitroll 47435c6128 Async I/O implemented 2023-06-22 13:16:12 +02:00
Profitroll db77f62459 Fixed incorrect exceptions codes 2023-06-22 13:06:10 +02:00
Profitroll b51026b200 Merge pull request 'Updated dependencies and Python version' (#10) from dev into master
Reviewed-on: #10
2023-06-22 13:51:44 +03:00
Profitroll b30547eca8 Merge branch 'master' into dev 2023-06-22 13:51:38 +03:00
Profitroll 782b489db2 Merge pull request 'Update dependency pymongo to v4.4.0' (#9) from renovate/pymongo-4.x into dev
Reviewed-on: #9
2023-06-21 22:45:10 +03:00
Renovate d085a0e639 Update dependency pymongo to v4.4.0 2023-06-21 20:58:06 +03:00
Profitroll 30d72c84ed Merge pull request 'Update dependency fastapi to v0.97.0' (#8) from renovate/fastapi-0.x into dev
Reviewed-on: #8
2023-06-14 11:48:37 +03:00
Renovate e1e42fdb60 Update dependency fastapi to v0.97.0 2023-06-14 11:41:39 +03:00
Profitroll 36169b0e77 Python min version is now 3.8
Due to a bump of ujson to 5.8.0, version of Python supported is risen to 3.8
2023-06-11 12:28:32 +03:00
Profitroll 5de935cd21 Merge pull request 'Update dependency ujson to ~=5.8.0' (#7) from renovate/ujson-5.x into dev
Reviewed-on: #7
2023-06-11 12:26:39 +03:00
Renovate 1e6afc6b0c Update dependency ujson to ~=5.8.0 2023-06-11 12:18:53 +03:00
Profitroll f9e6ee9c72 Merge pull request 'Update dependency fastapi to v0.96.1' (#6) from renovate/fastapi-0.x into dev
Reviewed-on: #6
2023-06-11 09:48:18 +03:00
Renovate f512df408f Update dependency fastapi to v0.96.1 2023-06-11 01:55:54 +03:00
Profitroll aa083811dc Merge pull request 'Update dependency fastapi to v0.96.0' (#5) from renovate/fastapi-0.x into dev
Reviewed-on: #5
2023-06-03 17:58:32 +03:00
Renovate 4d24696d3d Update dependency fastapi to v0.96.0 2023-06-03 17:45:29 +03:00
Profitroll c7cb4a6dff Merge pull request 'Update dependency fastapi to v0.95.2' (#4) from renovate/fastapi-0.x into dev
Reviewed-on: #4
2023-05-16 18:46:18 +03:00
Renovate 4060aae038 Update dependency fastapi to v0.95.2 2023-05-16 16:48:51 +03:00
Profitroll 4eea82a160 Merge pull request 'Update dependency fastapi to v0.95.1' (#2) from renovate/fastapi-0.x into dev
Reviewed-on: #2
2023-04-21 10:15:15 +03:00
Renovate 4ce4264580 Update dependency fastapi to v0.95.1 2023-04-21 10:11:31 +03:00
Profitroll 6feed4359a Update '.renovaterc' 2023-04-21 10:03:26 +03:00
Profitroll 2afc82cf01 Add '.re' 2023-04-21 10:03:10 +03:00
Profitroll bf0046c3d5 Update '.renovaterc' 2023-04-21 10:02:18 +03:00
Profitroll c55a2d0d44 Add '.ren' 2023-04-21 10:01:52 +03:00
Profitroll a380da81bb Updated API version to 0.2 2023-03-23 12:34:31 +01:00
Profitroll e858e7d7f4 Changed token search logic 2023-03-23 12:34:18 +01:00
Profitroll fcbbd4f2bf Bump FastAPI to 0.95.0 and exif to 1.6.0 2023-03-23 10:57:15 +01:00
25 changed files with 942 additions and 486 deletions

3 .gitignore vendored

@@ -153,5 +153,6 @@ cython_debug/
#.idea/
# Custom
.vscode
data/
.vscode/
config.json

20 .renovaterc Normal file

@@ -0,0 +1,20 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"extends": [
"config:base"
],
"baseBranches": [
"dev"
],
"packageRules": [
{
"matchUpdateTypes": [
"minor",
"patch",
"pin",
"digest"
],
"automerge": true
}
]
}


@@ -1,7 +1,7 @@
<h1 align="center">Photos API</h1>
<p align="center">
<a href="https://git.end-play.xyz/profitroll/PhotosAPILICENSE"><img alt="License: GPL" src="https://img.shields.io/badge/License-GPL-blue"></a>
<a href="https://git.end-play.xyz/profitroll/PhotosAPI/src/branch/master/README.md"><img alt="License: GPL" src="https://img.shields.io/badge/License-GPL-blue"></a>
<a href="https://git.end-play.xyz/profitroll/PhotosAPI"><img alt="Code style: black" src="https://img.shields.io/badge/code%20style-black-000000.svg"></a>
</p>
@@ -9,7 +9,7 @@ Small and simple API server for saving photos and videos.
## Dependencies
* [Python 3.7+](https://www.python.org) (3.9+ recommended)
* [Python 3.8+](https://www.python.org) (3.9+ recommended)
* [MongoDB](https://www.mongodb.com)
* [exiftool](https://exiftool.org)
* [jpegoptim](https://github.com/tjko/jpegoptim)
@@ -47,7 +47,8 @@ First you need to have a Python interpreter, MongoDB and optionally git. You can
1. Copy file `config_example.json` to `config.json`
2. Open `config.json` using your favorite text editor. For example `nano config.json`
3. Change `"database"` keys to match your MongoDB setup
4. Change `"external_address"` to the ip/http address you may get in responses. By default it's `"localhost"`. This is extremely useful when running behind reverse-proxy.
4. Set the key `"secret"` to your JWT secret. You can type in anything, but long secrets are recommended. You can also set environment variable `PHOTOSAPI_SECRET` as an alternative
5. Change `"external_address"` to the ip/http address you may get in responses. By default it's `"localhost"`. This is extremely useful when running behind reverse-proxy.
After configuring everything listed above your API will be able to boot, however further configuration can be done. You can read about it in [repository's wiki](https://git.end-play.xyz/profitroll/PhotosAPI/wiki/Configuration). There's no need to focus on that now, it makes more sense to configure it afterwards.
@@ -58,6 +59,19 @@ First you need to have a Python interpreter, MongoDB and optionally git. You can
Learn more about available uvicorn arguments using `uvicorn --help`
## Upgrading
When a new version comes out, sometimes you want to upgrade your instance right away. Here's a checklist of what to do:
1. Carefully read the patch notes of the version you want to update to and all the versions that came out between the release of your version and the one you want to upgrade to.
Breaking changes will be marked as such, and config updates will also be described in the patch notes
2. Make a backup of your currently working instance. This includes both the PhotosAPI and the database
3. Download the latest version using git (`git pull` if you cloned the repo in the past) or from the releases
4. Reconfigure the config if needed and apply the changes from the patch notes
5. Upgrade the dependencies in your virtual environment using `pip install -r requirements.txt`
6. Start the migration using `python photos_api.py --migrate` from your virtual environment
7. Test if everything works and troubleshoot/rollback if not
## Using as a service
It's a good practice to use your API as a systemd service on Linux. Here's a quick overview how that can be done.


@@ -1,7 +1,9 @@
from typing import Literal
from fastapi import HTTPException
class AlbumNotFoundError(Exception):
class AlbumNotFoundError(HTTPException):
"""Raises HTTP 404 if no album with this ID found."""
def __init__(self, id: str):
@@ -16,7 +18,7 @@ class AlbumNotFoundError(Exception):
}
class AlbumNameNotFoundError(Exception):
class AlbumNameNotFoundError(HTTPException):
"""Raises HTTP 404 if no album with this name found."""
def __init__(self, name: str):
@@ -29,9 +31,15 @@ class AlbumNameNotFoundError(Exception):
}
},
}
super().__init__(
status_code=404,
detail=self.openapi["content"]["application/json"]["example"][
"detail"
].format(name=self.name),
)
class AlbumAlreadyExistsError(Exception):
class AlbumAlreadyExistsError(HTTPException):
"""Raises HTTP 409 if album with this name already exists."""
def __init__(self, name: str):
@@ -44,9 +52,15 @@ class AlbumAlreadyExistsError(Exception):
}
},
}
super().__init__(
status_code=409,
detail=self.openapi["content"]["application/json"]["example"][
"detail"
].format(name=self.name),
)
class AlbumIncorrectError(Exception):
class AlbumIncorrectError(HTTPException):
"""Raises HTTP 406 if album's title or name is invalid."""
def __init__(self, place: Literal["name", "title"], error: str) -> None:
@@ -56,13 +70,19 @@ class AlbumIncorrectError(Exception):
"description": "Album Name/Title Invalid",
"content": {
"application/json": {
"example": {"detail": "Album {name/title} invalid: {error}"}
"example": {"detail": "Album {place} invalid: {error}"}
}
},
}
super().__init__(
status_code=406,
detail=self.openapi["content"]["application/json"]["example"][
"detail"
].format(place=self.place, error=self.error),
)
class PhotoNotFoundError(Exception):
class PhotoNotFoundError(HTTPException):
"""Raises HTTP 404 if no photo with this ID found."""
def __init__(self, id: str):
@@ -75,9 +95,15 @@ class PhotoNotFoundError(Exception):
}
},
}
super().__init__(
status_code=404,
detail=self.openapi["content"]["application/json"]["example"][
"detail"
].format(id=self.id),
)
class PhotoSearchQueryEmptyError(Exception):
class PhotoSearchQueryEmptyError(HTTPException):
"""Raises HTTP 422 if no photo search query provided."""
def __init__(self):
@@ -91,9 +117,13 @@ class PhotoSearchQueryEmptyError(Exception):
}
},
}
super().__init__(
status_code=422,
detail=self.openapi["content"]["application/json"]["example"]["detail"],
)
class VideoNotFoundError(Exception):
class VideoNotFoundError(HTTPException):
"""Raises HTTP 404 if no video with this ID found."""
def __init__(self, id: str):
@@ -106,9 +136,15 @@ class VideoNotFoundError(Exception):
}
},
}
super().__init__(
status_code=404,
detail=self.openapi["content"]["application/json"]["example"][
"detail"
].format(id=self.id),
)
class VideoSearchQueryEmptyError(Exception):
class VideoSearchQueryEmptyError(HTTPException):
"""Raises HTTP 422 if no video search query provided."""
def __init__(self):
@@ -122,9 +158,33 @@ class VideoSearchQueryEmptyError(Exception):
}
},
}
super().__init__(
status_code=422,
detail=self.openapi["content"]["application/json"]["example"]["detail"],
)
class SearchPageInvalidError(Exception):
class SearchLimitInvalidError(HTTPException):
"""Raises HTTP 400 if search results limit not in valid range."""
def __init__(self):
self.openapi = {
"description": "Invalid Limit",
"content": {
"application/json": {
"example": {
"detail": "Parameter 'limit' must be greater or equal to 1."
}
}
},
}
super().__init__(
status_code=400,
detail=self.openapi["content"]["application/json"]["example"]["detail"],
)
class SearchPageInvalidError(HTTPException):
"""Raises HTTP 400 if page or page size are not in valid range."""
def __init__(self):
@@ -138,9 +198,13 @@ class SearchPageInvalidError(Exception):
}
},
}
super().__init__(
status_code=400,
detail=self.openapi["content"]["application/json"]["example"]["detail"],
)
class SearchTokenInvalidError(Exception):
class SearchTokenInvalidError(HTTPException):
"""Raises HTTP 401 if search token is not valid."""
def __init__(self):
@@ -150,9 +214,13 @@ class SearchTokenInvalidError(Exception):
"application/json": {"example": {"detail": "Invalid search token."}}
},
}
super().__init__(
status_code=401,
detail=self.openapi["content"]["application/json"]["example"]["detail"],
)
class UserEmailCodeInvalid(Exception):
class UserEmailCodeInvalid(HTTPException):
"""Raises HTTP 400 if email confirmation code is not valid."""
def __init__(self):
@@ -164,9 +232,13 @@ class UserEmailCodeInvalid(Exception):
}
},
}
super().__init__(
status_code=400,
detail=self.openapi["content"]["application/json"]["example"]["detail"],
)
class UserAlreadyExists(Exception):
class UserAlreadyExists(HTTPException):
"""Raises HTTP 409 if user with this name already exists."""
def __init__(self):
@@ -178,9 +250,13 @@ class UserAlreadyExists(Exception):
}
},
}
super().__init__(
status_code=409,
detail=self.openapi["content"]["application/json"]["example"]["detail"],
)
class AccessTokenInvalidError(Exception):
class AccessTokenInvalidError(HTTPException):
"""Raises HTTP 401 if access token is not valid."""
def __init__(self):
@@ -190,9 +266,13 @@ class AccessTokenInvalidError(Exception):
"application/json": {"example": {"detail": "Invalid access token."}}
},
}
super().__init__(
status_code=401,
detail=self.openapi["content"]["application/json"]["example"]["detail"],
)
class UserCredentialsInvalid(Exception):
class UserCredentialsInvalid(HTTPException):
"""Raises HTTP 401 if user credentials are not valid."""
def __init__(self):
@@ -202,3 +282,27 @@ class UserCredentialsInvalid(Exception):
"application/json": {"example": {"detail": "Invalid credentials."}}
},
}
super().__init__(
status_code=401,
detail=self.openapi["content"]["application/json"]["example"]["detail"],
)
class UserMediaQuotaReached(HTTPException):
"""Raises HTTP 403 if user's quota has been reached."""
def __init__(self):
self.openapi = {
"description": "Media Quota Reached",
"content": {
"application/json": {
"example": {
"detail": "Media quota has been reached, media upload impossible."
}
}
},
}
super().__init__(
status_code=403,
detail=self.openapi["content"]["application/json"]["example"]["detail"],
)
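The conversion above turns each plain `Exception` into a `fastapi.HTTPException` subclass, so raising the error both aborts the request with the right status code and reuses the same `openapi` dict for the route's response documentation. A minimal sketch of the pattern, using a stand-in base class so it runs without FastAPI installed, and with an illustrative detail string (the real template for this class is not visible in the hunk):

```python
class HTTPException(Exception):
    """Stand-in for fastapi.HTTPException so the sketch is self-contained."""

    def __init__(self, status_code: int, detail: str):
        self.status_code = status_code
        self.detail = detail
        super().__init__(detail)


class AlbumNotFoundError(HTTPException):
    """Raises HTTP 404 if no album with this ID found."""

    def __init__(self, id: str):
        self.id = id
        # The same dict is reused as the route's OpenAPI "responses" entry.
        self.openapi = {
            "description": "Album Does Not Exist",
            "content": {
                "application/json": {
                    # Illustrative detail template, not taken from the diff.
                    "example": {"detail": "Album with id {id} not found."}
                }
            },
        }
        super().__init__(
            status_code=404,
            detail=self.openapi["content"]["application/json"]["example"][
                "detail"
            ].format(id=self.id),
        )
```

Because FastAPI already handles `HTTPException` natively, raising such a subclass is enough to produce the response even without a dedicated exception handler.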


@@ -1,4 +1,5 @@
from typing import List, Union
from pydantic import BaseModel
@@ -71,3 +72,11 @@ class SearchResultsPhoto(BaseModel):
class SearchResultsVideo(BaseModel):
results: List[VideoSearch]
next_page: Union[str, None]
class RandomSearchResultsPhoto(BaseModel):
results: List[PhotoSearch]
class RandomSearchResultsVideo(BaseModel):
results: List[VideoSearch]


@@ -6,6 +6,7 @@
"user": null,
"password": null
},
"secret": "",
"messages": {
"email_confirmed": "Email confirmed. You can now log in."
},
@@ -14,6 +15,7 @@
"media_token_valid_hours": 12,
"registration_enabled": true,
"registration_requires_confirmation": false,
"default_user_quota": 10000,
"mailer": {
"smtp": {
"host": "",


@@ -1,5 +1,6 @@
import re
from os import makedirs, path, rename
from os import makedirs, rename
from pathlib import Path
from shutil import rmtree
from typing import Union
@@ -46,14 +47,12 @@ async def album_create(
if 2 > len(title) > 40:
raise AlbumIncorrectError("title", "must be >2 and <40 characters.")
if col_albums.find_one({"name": name}) is not None:
if (await col_albums.find_one({"name": name})) is not None:
raise AlbumAlreadyExistsError(name)
makedirs(
path.join("data", "users", current_user.user, "albums", name), exist_ok=True
)
makedirs(Path(f"data/users/{current_user.user}/albums/{name}"), exist_ok=True)
uploaded = col_albums.insert_one(
uploaded = await col_albums.insert_one(
{"user": current_user.user, "name": name, "title": title, "cover": None}
)
@@ -68,9 +67,10 @@ async def album_find(
current_user: User = Security(get_current_active_user, scopes=["albums.list"]),
):
output = {"results": []}
albums = list(col_albums.find({"user": current_user.user, "name": re.compile(q)}))
for album in albums:
async for album in col_albums.find(
{"user": current_user.user, "name": re.compile(q)}
):
output["results"].append(
{
"id": album["_id"].__str__(),
@@ -103,18 +103,18 @@ async def album_patch(
current_user: User = Security(get_current_active_user, scopes=["albums.write"]),
):
try:
album = col_albums.find_one({"_id": ObjectId(id)})
album = await col_albums.find_one({"_id": ObjectId(id)})
if album is None:
raise InvalidId(id)
except InvalidId:
raise AlbumNotFoundError(id)
except InvalidId as exc:
raise AlbumNotFoundError(id) from exc
if title is not None:
if 2 > len(title) > 40:
raise AlbumIncorrectError("title", "must be >2 and <40 characters.")
else:
if title is None:
title = album["title"]
elif 2 > len(title) > 40:
raise AlbumIncorrectError("title", "must be >2 and <40 characters.")
if name is not None:
if re.search(re.compile("^[a-z,0-9,_]*$"), name) is False:
raise AlbumIncorrectError(
@@ -123,10 +123,10 @@ async def album_patch(
if 2 > len(name) > 20:
raise AlbumIncorrectError("name", "must be >2 and <20 characters.")
rename(
path.join("data", "users", current_user.user, "albums", album["name"]),
path.join("data", "users", current_user.user, "albums", name),
Path(f"data/users/{current_user.user}/albums/{album['name']}"),
Path(f"data/users/{current_user.user}/albums/{name}"),
)
col_photos.update_many(
await col_photos.update_many(
{"user": current_user.user, "album": album["name"]},
{"$set": {"album": name}},
)
@@ -134,12 +134,14 @@ async def album_patch(
name = album["name"]
if cover is not None:
image = col_photos.find_one({"_id": ObjectId(cover), "album": album["name"]})
image = await col_photos.find_one(
{"_id": ObjectId(cover), "album": album["name"]}
)
cover = image["_id"].__str__() if image is not None else album["cover"]
else:
cover = album["cover"]
col_albums.update_one(
await col_albums.update_one(
{"_id": ObjectId(id)}, {"$set": {"name": name, "title": title, "cover": cover}}
)
@@ -167,11 +169,11 @@ async def album_put(
current_user: User = Security(get_current_active_user, scopes=["albums.write"]),
):
try:
album = col_albums.find_one({"_id": ObjectId(id)})
album = await col_albums.find_one({"_id": ObjectId(id)})
if album is None:
raise InvalidId(id)
except InvalidId:
raise AlbumNotFoundError(id)
except InvalidId as exc:
raise AlbumNotFoundError(id) from exc
if re.search(re.compile("^[a-z,0-9,_]*$"), name) is False:
raise AlbumIncorrectError("name", "can only contain a-z, 0-9 and _ characters.")
@@ -182,18 +184,18 @@ async def album_put(
if 2 > len(title) > 40:
raise AlbumIncorrectError("title", "must be >2 and <40 characters.")
image = col_photos.find_one({"_id": ObjectId(cover), "album": album["name"]})
image = await col_photos.find_one({"_id": ObjectId(cover), "album": album["name"]})
cover = image["_id"].__str__() if image is not None else None # type: ignore
rename(
path.join("data", "users", current_user.user, "albums", album["name"]),
path.join("data", "users", current_user.user, "albums", name),
Path(f"data/users/{current_user.user}/albums/{album['name']}"),
Path(f"data/users/{current_user.user}/albums/{name}"),
)
col_photos.update_many(
await col_photos.update_many(
{"user": current_user.user, "album": album["name"]}, {"$set": {"album": name}}
)
col_albums.update_one(
await col_albums.update_one(
{"_id": ObjectId(id)}, {"$set": {"name": name, "title": title, "cover": cover}}
)
@@ -214,14 +216,14 @@ async def album_delete(
current_user: User = Security(get_current_active_user, scopes=["albums.write"]),
):
try:
album = col_albums.find_one_and_delete({"_id": ObjectId(id)})
album = await col_albums.find_one_and_delete({"_id": ObjectId(id)})
if album is None:
raise InvalidId(id)
except InvalidId:
raise AlbumNotFoundError(id)
except InvalidId as exc:
raise AlbumNotFoundError(id) from exc
col_photos.delete_many({"album": album["name"]})
await col_photos.delete_many({"album": album["name"]})
rmtree(path.join("data", "users", current_user.user, "albums", album["name"]))
rmtree(Path(f"data/users/{current_user.user}/albums/{album['name']}"))
return Response(status_code=HTTP_204_NO_CONTENT)
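Across this file the async_pymongo migration changes driver calls in two ways: single-document operations gain an `await`, and cursor iteration moves from `list(col.find(...))` to `async for`. The shape of both changes can be sketched against a tiny fake async collection (async_pymongo itself is not assumed installed here):

```python
import asyncio


class FakeAsyncCollection:
    """Minimal stand-in mimicking the awaitable API used after the migration."""

    def __init__(self, docs):
        self._docs = docs

    async def find_one(self, query):
        # Awaitable single-document lookup, like the async driver's find_one.
        for doc in self._docs:
            if all(doc.get(k) == v for k, v in query.items()):
                return doc
        return None

    def find(self, query):
        # Returns an async iterable, like a cursor consumed with `async for`.
        async def cursor():
            for doc in self._docs:
                if all(doc.get(k) == v for k, v in query.items()):
                    yield doc

        return cursor()


async def album_exists(col, name: str) -> bool:
    # After the migration, the call itself must be awaited.
    return (await col.find_one({"name": name})) is not None


async def list_album_names(col, user: str) -> list:
    # After the migration, cursors are drained with `async for`,
    # not wrapped in list(...).
    return [album["name"] async for album in col.find({"user": user})]
```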


@@ -1,16 +1,34 @@
from fastapi import Request
from fastapi.responses import UJSONResponse
from modules.app import app
from classes.exceptions import *
from starlette.status import (
HTTP_400_BAD_REQUEST,
HTTP_401_UNAUTHORIZED,
HTTP_403_FORBIDDEN,
HTTP_404_NOT_FOUND,
HTTP_406_NOT_ACCEPTABLE,
HTTP_409_CONFLICT,
HTTP_422_UNPROCESSABLE_ENTITY,
)
from classes.exceptions import (
AccessTokenInvalidError,
AlbumAlreadyExistsError,
AlbumIncorrectError,
AlbumNotFoundError,
PhotoNotFoundError,
PhotoSearchQueryEmptyError,
SearchLimitInvalidError,
SearchPageInvalidError,
SearchTokenInvalidError,
UserAlreadyExists,
UserCredentialsInvalid,
UserEmailCodeInvalid,
UserMediaQuotaReached,
VideoNotFoundError,
VideoSearchQueryEmptyError,
)
from modules.app import app
@app.exception_handler(AlbumNotFoundError)
async def album_not_found_exception_handler(request: Request, exc: AlbumNotFoundError):
@@ -78,12 +96,24 @@ async def video_search_query_empty_exception_handler(
)
@app.exception_handler(SearchLimitInvalidError)
async def search_limit_invalid_exception_handler(
request: Request, exc: SearchLimitInvalidError
):
return UJSONResponse(
status_code=HTTP_400_BAD_REQUEST,
content={
"detail": "Parameter 'limit' must be greater or equal to 1."
},
)
@app.exception_handler(SearchPageInvalidError)
async def search_page_invalid_exception_handler(
request: Request, exc: SearchPageInvalidError
):
return UJSONResponse(
status_code=HTTP_400_BAD_REQUEST,
status_code=HTTP_401_UNAUTHORIZED,
content={
"detail": "Parameters 'page' and 'page_size' must be greater or equal to 1."
},
@@ -97,7 +127,7 @@ async def search_token_invalid_exception_handler(
return UJSONResponse(
status_code=HTTP_401_UNAUTHORIZED,
content={
"detail": "Parameters 'page' and 'page_size' must be greater or equal to 1."
"detail": "Invalid search token."
},
)
@@ -140,3 +170,13 @@ async def user_credentials_invalid_exception_handler(
status_code=HTTP_401_UNAUTHORIZED,
content={"detail": "Invalid credentials."},
)
@app.exception_handler(UserMediaQuotaReached)
async def user_media_quota_reached_exception_handler(
request: Request, exc: UserMediaQuotaReached
):
return UJSONResponse(
status_code=HTTP_403_FORBIDDEN,
content={"detail": "Media quota has been reached, media upload impossible."},
)


@@ -1,31 +1,36 @@
from os import path
from modules.app import app
from pathlib import Path
import aiofiles
from fastapi.responses import HTMLResponse, Response
from modules.app import app
@app.get("/pages/matter.css", include_in_schema=False)
async def page_matter():
with open(path.join("pages", "matter.css"), "r", encoding="utf-8") as f:
output = f.read()
async with aiofiles.open(Path("pages/matter.css"), "r", encoding="utf-8") as f:
output = await f.read()
return Response(content=output)
@app.get("/pages/{page}/{file}", include_in_schema=False)
async def page_assets(page: str, file: str):
with open(path.join("pages", page, file), "r", encoding="utf-8") as f:
output = f.read()
async with aiofiles.open(Path(f"pages/{page}/{file}"), "r", encoding="utf-8") as f:
output = await f.read()
return Response(content=output)
@app.get("/", include_in_schema=False)
async def page_home():
with open(path.join("pages", "home", "index.html"), "r", encoding="utf-8") as f:
output = f.read()
async with aiofiles.open(Path("pages/home/index.html"), "r", encoding="utf-8") as f:
output = await f.read()
return HTMLResponse(content=output)
@app.get("/register", include_in_schema=False)
async def page_register():
with open(path.join("pages", "register", "index.html"), "r", encoding="utf-8") as f:
output = f.read()
async with aiofiles.open(
Path("pages/register/index.html"), "r", encoding="utf-8"
) as f:
output = await f.read()
return HTMLResponse(content=output)


@@ -1,26 +1,47 @@
import logging
import re
import pickle
from datetime import datetime, timedelta, timezone
from os import makedirs, path, remove, system
from pathlib import Path
from random import randint
from secrets import token_urlsafe
from shutil import move
from threading import Thread
from typing import Union
from uuid import uuid4
from magic import Magic
from datetime import datetime, timedelta, timezone
from os import makedirs, path, remove, system
import aiofiles
from bson.errors import InvalidId
from bson.objectid import ObjectId
from fastapi import Security, UploadFile
from fastapi.responses import Response, UJSONResponse
from jose import JWTError, jwt
from magic import Magic
from plum.exceptions import UnpackError
from pydantic import ValidationError
from pymongo import DESCENDING
from starlette.status import HTTP_204_NO_CONTENT, HTTP_409_CONFLICT
from classes.exceptions import (
AccessTokenInvalidError,
AlbumNameNotFoundError,
PhotoNotFoundError,
PhotoSearchQueryEmptyError,
SearchLimitInvalidError,
SearchPageInvalidError,
SearchTokenInvalidError,
UserMediaQuotaReached,
)
from classes.models import Photo, PhotoPublic, SearchResultsPhoto
from classes.models import (
Photo,
PhotoPublic,
RandomSearchResultsPhoto,
SearchResultsPhoto,
)
from modules.app import app
from modules.database import col_albums, col_photos, col_tokens, col_videos
from modules.exif_reader import extract_location
from modules.hasher import get_phash, get_duplicates
from modules.hasher import get_duplicates, get_phash
from modules.scheduler import scheduler
from modules.security import (
ALGORITHM,
@@ -31,31 +52,18 @@ from modules.security import (
get_current_active_user,
get_user,
)
from modules.app import app
from modules.database import col_photos, col_albums, col_tokens
from pymongo import DESCENDING
from bson.objectid import ObjectId
from bson.errors import InvalidId
from plum.exceptions import UnpackError
from jose import JWTError, jwt
from modules.utils import configGet
from fastapi import UploadFile, Security
from fastapi.responses import UJSONResponse, Response
from fastapi.exceptions import HTTPException
from starlette.status import (
HTTP_204_NO_CONTENT,
HTTP_401_UNAUTHORIZED,
HTTP_409_CONFLICT,
)
from modules.utils import configGet, logWrite
logger = logging.getLogger(__name__)
async def compress_image(image_path: str):
image_type = Magic(mime=True).from_file(image_path)
if image_type not in ["image/jpeg", "image/png"]:
logWrite(f"Not compressing {image_path} because its mime is '{image_type}'")
logger.info(
"Not compressing %s because its mime is '%s'", image_path, image_type
)
return
size_before = path.getsize(image_path) / 1024
@ -71,16 +79,20 @@ async def compress_image(image_path: str):
return
task.start()
logWrite(f"Compressing '{path.split(image_path)[-1]}'...")
logger.info("Compressing '%s'...", Path(image_path).name)
task.join()
size_after = path.getsize(image_path) / 1024
logWrite(
f"Compressed '{path.split(image_path)[-1]}' from {size_before} Kb to {size_after} Kb"
logger.info(
"Compressed '%s' from %s Kb to %s Kb",
Path(image_path).name,
size_before,
size_after,
)
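The change above replaces `logWrite` f-strings with stdlib `logging` and %-style arguments. A short sketch of why that form is preferred (logger name and values are illustrative):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("compress")

# With %-style arguments the message is only interpolated when a
# handler actually emits the record; an f-string would be built
# unconditionally, even if INFO were disabled.
logger.info("Compressed '%s' from %s Kb to %s Kb", "photo.jpg", 2048.0, 512.0)
```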
photo_post_responses = {
403: UserMediaQuotaReached().openapi,
404: AlbumNameNotFoundError("name").openapi,
409: {
"description": "Image Duplicates Found",
@ -112,39 +124,40 @@ async def photo_upload(
caption: Union[str, None] = None,
current_user: User = Security(get_current_active_user, scopes=["photos.write"]),
):
if col_albums.find_one({"user": current_user.user, "name": album}) is None:
if (await col_albums.find_one({"user": current_user.user, "name": album})) is None:
raise AlbumNameNotFoundError(album)
makedirs(
path.join("data", "users", current_user.user, "albums", album), exist_ok=True
)
user_media_count = (
await col_photos.count_documents({"user": current_user.user})
) + (await col_videos.count_documents({"user": current_user.user}))
if user_media_count >= current_user.quota and not current_user.quota == -1: # type: ignore
raise UserMediaQuotaReached()
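The quota guard above counts a user's photos plus videos and treats a quota of -1 as unlimited. A standalone sketch of that rule (the helper name is hypothetical; treating `None` as unlimited is an assumption, since the diff only special-cases -1):

```python
from typing import Optional


def quota_reached(media_count: int, quota: Optional[int]) -> bool:
    """Return True when an upload should be rejected.

    -1 means unlimited; None (quota never set) is treated as
    unlimited here as well -- an assumption for this sketch.
    """
    if quota is None or quota == -1:
        return False
    return media_count >= quota
```

For example, `quota_reached(10, 10)` rejects the upload, while `quota_reached(10, -1)` allows it.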
makedirs(Path(f"data/users/{current_user.user}/albums/{album}"), exist_ok=True)
filename = file.filename
if path.exists(
path.join("data", "users", current_user.user, "albums", album, file.filename)
):
if Path(f"data/users/{current_user.user}/albums/{album}/{file.filename}").exists():
base_name = file.filename.split(".")[:-1]
extension = file.filename.split(".")[-1]
filename = (
".".join(base_name) + f"_{int(datetime.now().timestamp())}." + extension
)
with open(
path.join("data", "users", current_user.user, "albums", album, filename), "wb"
async with aiofiles.open(
Path(f"data/users/{current_user.user}/albums/{album}/{filename}"), "wb"
) as f:
f.write(await file.read())
await f.write(await file.read())
file_hash = await get_phash(
path.join("data", "users", current_user.user, "albums", album, filename)
Path(f"data/users/{current_user.user}/albums/{album}/{filename}")
)
duplicates = await get_duplicates(file_hash, album)
if len(duplicates) > 0 and ignore_duplicates is False:
if len(duplicates) > 0 and not ignore_duplicates:
if configGet("media_token_access") is True:
duplicates_ids = []
for entry in duplicates:
duplicates_ids.append(entry["id"])
duplicates_ids = [entry["id"] for entry in duplicates]
access_token = create_access_token(
data={
"sub": current_user.user,
@ -154,7 +167,7 @@ async def photo_upload(
expires_delta=timedelta(hours=configGet("media_token_valid_hours")),
)
access_token_short = uuid4().hex[:12].lower()
col_tokens.insert_one(
await col_tokens.insert_one(
{
"short": access_token_short,
"access_token": access_token,
@ -174,12 +187,12 @@ async def photo_upload(
try:
coords = extract_location(
path.join("data", "users", current_user.user, "albums", album, filename)
Path(f"data/users/{current_user.user}/albums/{album}/{filename}")
)
except (UnpackError, ValueError):
coords = {"lng": 0.0, "lat": 0.0, "alt": 0.0}
uploaded = col_photos.insert_one(
uploaded = await col_photos.insert_one(
{
"user": current_user.user,
"album": album,
@ -194,14 +207,12 @@ async def photo_upload(
}
)
if compress is True:
if compress:
scheduler.add_job(
compress_image,
trigger="date",
run_date=datetime.now() + timedelta(seconds=1),
args=[
path.join("data", "users", current_user.user, "albums", album, filename)
],
args=[Path(f"data/users/{current_user.user}/albums/{album}/{filename}")],
)
return UJSONResponse(
@ -229,7 +240,7 @@ if configGet("media_token_access") is True:
responses=photo_get_token_responses,
)
async def photo_get_token(token: str, id: int):
db_entry = col_tokens.find_one({"short": token})
db_entry = await col_tokens.find_one({"short": token})
if db_entry is None:
raise AccessTokenInvalidError()
@ -244,57 +255,74 @@ if configGet("media_token_access") is True:
raise AccessTokenInvalidError()
token_scopes = payload.get("scopes", [])
token_data = TokenData(scopes=token_scopes, user=user)
except (JWTError, ValidationError) as exp:
print(exp, flush=True)
raise AccessTokenInvalidError()
except (JWTError, ValidationError) as exc:
raise AccessTokenInvalidError() from exc
user = get_user(user=token_data.user)
user_record = await get_user(user=token_data.user)
if id not in payload.get("allowed", []):
raise AccessTokenInvalidError()
try:
image = col_photos.find_one({"_id": ObjectId(id)})
image = await col_photos.find_one({"_id": ObjectId(id)})
if image is None:
raise InvalidId(id)
except InvalidId:
raise PhotoNotFoundError(id)
except InvalidId as exc:
raise PhotoNotFoundError(id) from exc
image_path = path.join(
"data", "users", user.user, "albums", image["album"], image["filename"]
image_path = Path(
f"data/users/{user_record.user}/albums/{image['album']}/{image['filename']}"
)
mime = Magic(mime=True).from_file(image_path)
with open(image_path, "rb") as f:
image_file = f.read()
async with aiofiles.open(image_path, "rb") as f:
image_file = await f.read()
return Response(image_file, media_type=mime)
photo_get_responses = {404: PhotoNotFoundError("id").openapi}
photo_get_responses = {
200: {
"content": {
"application/octet-stream": {
"schema": {
"type": "string",
"format": "binary",
"contentMediaType": "image/*",
}
}
}
},
404: PhotoNotFoundError("id").openapi,
}
@app.get("/photos/{id}", description="Get a photo by id", responses=photo_get_responses)
@app.get(
"/photos/{id}",
description="Get a photo by id",
responses=photo_get_responses,
response_class=Response,
)
async def photo_get(
id: str,
current_user: User = Security(get_current_active_user, scopes=["photos.read"]),
):
try:
image = col_photos.find_one({"_id": ObjectId(id)})
image = await col_photos.find_one({"_id": ObjectId(id)})
if image is None:
raise InvalidId(id)
except InvalidId:
raise PhotoNotFoundError(id)
except InvalidId as exc:
raise PhotoNotFoundError(id) from exc
image_path = path.join(
"data", "users", current_user.user, "albums", image["album"], image["filename"]
image_path = Path(
f"data/users/{current_user.user}/albums/{image['album']}/{image['filename']}"
)
mime = Magic(mime=True).from_file(image_path)
with open(image_path, "rb") as f:
image_file = f.read()
async with aiofiles.open(image_path, "rb") as f:
image_file = await f.read()
return Response(image_file, media_type=mime)
@ -314,20 +342,18 @@ async def photo_move(
current_user: User = Security(get_current_active_user, scopes=["photos.write"]),
):
try:
image = col_photos.find_one({"_id": ObjectId(id)})
image = await col_photos.find_one({"_id": ObjectId(id)})
if image is None:
raise InvalidId(id)
except InvalidId:
raise PhotoNotFoundError(id)
except InvalidId as exc:
raise PhotoNotFoundError(id) from exc
if col_albums.find_one({"user": current_user.user, "name": album}) is None:
if (await col_albums.find_one({"user": current_user.user, "name": album})) is None:
raise AlbumNameNotFoundError(album)
if path.exists(
path.join(
"data", "users", current_user.user, "albums", album, image["filename"]
)
):
if Path(
f"data/users/{current_user.user}/albums/{album}/{image['filename']}"
).exists():
base_name = image["filename"].split(".")[:-1]
extension = image["filename"].split(".")[-1]
filename = (
@ -336,7 +362,7 @@ async def photo_move(
else:
filename = image["filename"]
col_photos.find_one_and_update(
await col_photos.find_one_and_update(
{"_id": ObjectId(id)},
{
"$set": {
@ -348,15 +374,10 @@ async def photo_move(
)
move(
path.join(
"data",
"users",
current_user.user,
"albums",
image["album"],
image["filename"],
Path(
f"data/users/{current_user.user}/albums/{image['album']}/{image['filename']}"
),
path.join("data", "users", current_user.user, "albums", album, filename),
Path(f"data/users/{current_user.user}/albums/{album}/{filename}"),
)
return UJSONResponse(
@ -383,13 +404,13 @@ async def photo_patch(
current_user: User = Security(get_current_active_user, scopes=["photos.write"]),
):
try:
image = col_photos.find_one({"_id": ObjectId(id)})
image = await col_photos.find_one({"_id": ObjectId(id)})
if image is None:
raise InvalidId(id)
except InvalidId:
raise PhotoNotFoundError(id)
except InvalidId as exc:
raise PhotoNotFoundError(id) from exc
col_photos.find_one_and_update(
await col_photos.find_one_and_update(
{"_id": ObjectId(id)},
{"$set": {"caption": caption, "dates.modified": datetime.now(tz=timezone.utc)}},
)
@ -417,33 +438,90 @@ async def photo_delete(
current_user: User = Security(get_current_active_user, scopes=["photos.write"]),
):
try:
image = col_photos.find_one_and_delete({"_id": ObjectId(id)})
image = await col_photos.find_one_and_delete({"_id": ObjectId(id)})
if image is None:
raise InvalidId(id)
except InvalidId:
raise PhotoNotFoundError(id)
except InvalidId as exc:
raise PhotoNotFoundError(id) from exc
album = col_albums.find_one({"name": image["album"]})
album = await col_albums.find_one({"name": image["album"]})
if album is not None and album["cover"] == image["_id"].__str__():
col_albums.update_one({"name": image["album"]}, {"$set": {"cover": None}})
await col_albums.update_one({"name": image["album"]}, {"$set": {"cover": None}})
remove(
path.join(
"data",
"users",
current_user.user,
"albums",
image["album"],
image["filename"],
Path(
f"data/users/{current_user.user}/albums/{image['album']}/{image['filename']}"
)
)
return Response(status_code=HTTP_204_NO_CONTENT)
photo_random_responses = {
400: SearchLimitInvalidError().openapi,
404: AlbumNameNotFoundError("name").openapi,
}
@app.get(
"/albums/{album}/photos/random",
description="Get one random photo, optionally by caption",
response_class=UJSONResponse,
response_model=RandomSearchResultsPhoto,
responses=photo_random_responses,
)
async def photo_random(
album: str,
caption: Union[str, None] = None,
limit: int = 100,
current_user: User = Security(get_current_active_user, scopes=["photos.list"]),
):
if (await col_albums.find_one({"user": current_user.user, "name": album})) is None:
raise AlbumNameNotFoundError(album)
if limit <= 0:
raise SearchLimitInvalidError()
output = {"results": []}
db_query = (
{
"user": current_user.user,
"album": album,
"caption": re.compile(caption),
}
if caption is not None
else {
"user": current_user.user,
"album": album,
}
)
documents_count = await col_photos.count_documents(db_query)
skip = randint(0, documents_count - 1) if documents_count > 1 else 0
async for image in col_photos.aggregate(
[
{"$match": db_query},
{"$skip": skip},
{"$limit": limit},
]
):
output["results"].append(
{
"id": image["_id"].__str__(),
"filename": image["filename"],
"caption": image["caption"],
}
)
return UJSONResponse(output)
photo_find_responses = {
400: SearchPageInvalidError().openapi,
401: SearchTokenInvalidError().openapi,
404: AlbumNameNotFoundError("name").openapi,
422: PhotoSearchQueryEmptyError().openapi,
}
@ -451,7 +529,7 @@ photo_find_responses = {
@app.get(
"/albums/{album}/photos",
description="Find a photo by filename",
description="Find a photo by filename, caption, location or token",
response_class=UJSONResponse,
response_model=SearchResultsPhoto,
responses=photo_find_responses,
@ -460,6 +538,7 @@ async def photo_find(
album: str,
q: Union[str, None] = None,
caption: Union[str, None] = None,
token: Union[str, None] = None,
page: int = 1,
page_size: int = 100,
lat: Union[float, None] = None,
@ -467,7 +546,25 @@ async def photo_find(
radius: Union[int, None] = None,
current_user: User = Security(get_current_active_user, scopes=["photos.list"]),
):
if col_albums.find_one({"user": current_user.user, "name": album}) is None:
if token is not None:
found_record = await col_tokens.find_one({"token": token})
if found_record is None:
raise SearchTokenInvalidError()
return await photo_find(
album=album,
q=found_record["query"],
caption=found_record["caption"],
lat=found_record["lat"],
lng=found_record["lng"],
radius=found_record["radius"],
page=found_record["page"],
page_size=found_record["page_size"],
current_user=current_user,
)
if (await col_albums.find_one({"user": current_user.user, "name": album})) is None:
raise AlbumNameNotFoundError(album)
if page <= 0 or page_size <= 0:
@ -496,7 +593,7 @@ async def photo_find(
}
elif q is None and caption is None:
raise PhotoSearchQueryEmptyError()
elif q is None and caption is not None:
elif q is None:
db_query = {
"user": current_user.user,
"album": album,
@ -507,7 +604,7 @@ async def photo_find(
"album": album,
"caption": re.compile(caption),
}
elif q is not None and caption is None:
elif caption is None:
db_query = {
"user": current_user.user,
"album": album,
@ -519,16 +616,22 @@ async def photo_find(
"filename": re.compile(q),
}
else:
db_query = {"user": current_user.user, "album": album, "filename": re.compile(q), "caption": re.compile(caption)} # type: ignore
db_query_count = {"user": current_user.user, "album": album, "filename": re.compile(q), "caption": re.compile(caption)} # type: ignore
db_query = {
"user": current_user.user,
"album": album,
"filename": re.compile(q),
"caption": re.compile(caption),
}
db_query_count = {
"user": current_user.user,
"album": album,
"filename": re.compile(q),
"caption": re.compile(caption),
}
images = list(
col_photos.find(db_query, limit=page_size, skip=skip).sort(
"dates.uploaded", DESCENDING
)
)
for image in images:
async for image in col_photos.find(db_query, limit=page_size, skip=skip).sort(
"dates.uploaded", direction=DESCENDING
):
output["results"].append(
{
"id": image["_id"].__str__(),
@ -537,45 +640,22 @@ async def photo_find(
}
)
if col_photos.count_documents(db_query_count) > page * page_size:
if (await col_photos.count_documents(db_query_count)) > page * page_size:
token = str(token_urlsafe(32))
col_tokens.insert_one(
await col_tokens.insert_one(
{
"token": token,
"query": q,
"album": album,
"caption": caption,
"lat": lat,
"lng": lng,
"radius": radius,
"page": page + 1,
"page_size": page_size,
"user": pickle.dumps(current_user),
}
)
output["next_page"] = f"/albums/{album}/photos/token?token={token}" # type: ignore
output["next_page"] = f"/albums/{album}/photos/?token={token}" # type: ignore
else:
output["next_page"] = None # type: ignore
return UJSONResponse(output)
photo_find_token_responses = {401: SearchTokenInvalidError().openapi}
@app.get(
"/albums/{album}/photos/token",
description="Find a photo by token",
response_class=UJSONResponse,
response_model=SearchResultsPhoto,
responses=photo_find_token_responses,
)
async def photo_find_token(token: str):
found_record = col_tokens.find_one({"token": token})
if found_record is None:
raise SearchTokenInvalidError()
return await photo_find(
q=found_record["query"],
album=found_record["album"],
page=found_record["page"],
page_size=found_record["page_size"],
current_user=pickle.loads(found_record["user"]),
)
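The removed `/albums/{album}/photos/token` endpoint above is folded into `photo_find` as a `token` query parameter; both variants persist the search state under an opaque continuation token in `col_tokens`. A minimal in-memory sketch of that pattern (a dict stands in for the collection; function names are hypothetical):

```python
from secrets import token_urlsafe

# In-memory stand-in for the col_tokens collection.
_tokens: dict = {}


def save_search_state(query: str, page: int, page_size: int) -> str:
    """Persist the search state and hand back an opaque token."""
    token = token_urlsafe(32)
    # Store page + 1 so resuming continues on the next page,
    # as the diff does when building "next_page".
    _tokens[token] = {"query": query, "page": page + 1, "page_size": page_size}
    return token


def resume_search(token: str) -> dict:
    """Resolve a continuation token back into the stored state."""
    state = _tokens.get(token)
    if state is None:
        raise KeyError("search token invalid")
    return state
```

Folding the lookup into the main route also removes the need to pickle the user into the token record: the resumed call runs under the caller's own credentials.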

View File

@ -1,12 +1,10 @@
from datetime import timedelta
from classes.exceptions import UserCredentialsInvalid
from modules.app import app
from fastapi import Depends
from fastapi.security import (
OAuth2PasswordRequestForm,
)
from fastapi.security import OAuth2PasswordRequestForm
from classes.exceptions import UserCredentialsInvalid
from modules.app import app
from modules.security import (
ACCESS_TOKEN_EXPIRE_DAYS,
Token,
@ -19,7 +17,7 @@ token_post_responses = {401: UserCredentialsInvalid().openapi}
@app.post("/token", response_model=Token, responses=token_post_responses)
async def login_for_access_token(form_data: OAuth2PasswordRequestForm = Depends()):
user = authenticate_user(form_data.username, form_data.password)
user = await authenticate_user(form_data.username, form_data.password)
if not user:
raise UserCredentialsInvalid()
access_token_expires = timedelta(days=ACCESS_TOKEN_EXPIRE_DAYS)

View File

@ -1,27 +1,20 @@
import logging
from datetime import datetime, timedelta
from uuid import uuid1
from fastapi import Depends, Form
from fastapi.responses import Response, UJSONResponse
from starlette.status import HTTP_204_NO_CONTENT
from classes.exceptions import (
UserAlreadyExists,
UserCredentialsInvalid,
UserEmailCodeInvalid,
)
from modules.database import (
col_users,
col_albums,
col_photos,
col_emails,
col_videos,
col_emails,
)
from modules.app import app
from modules.utils import configGet, logWrite
from modules.scheduler import scheduler
from modules.database import col_albums, col_emails, col_photos, col_users, col_videos
from modules.mailer import mail_sender
from uuid import uuid1
from fastapi import Depends, Form
from fastapi.responses import Response, UJSONResponse
from starlette.status import HTTP_204_NO_CONTENT
from modules.scheduler import scheduler
from modules.security import (
User,
get_current_active_user,
@ -29,6 +22,9 @@ from modules.security import (
get_user,
verify_password,
)
from modules.utils import configGet
logger = logging.getLogger(__name__)
async def send_confirmation(user: str, email: str):
@ -45,12 +41,14 @@ async def send_confirmation(user: str, email: str):
+ f"/users/{user}/confirm?code={confirmation_code}"
),
)
col_emails.insert_one(
await col_emails.insert_one(
{"user": user, "email": email, "used": False, "code": confirmation_code}
)
logWrite(f"Sent confirmation email to '{email}' with code {confirmation_code}")
except Exception as exp:
logWrite(f"Could not send confirmation email to '{email}' due to: {exp}")
logger.info(
"Sent confirmation email to '%s' with code %s", email, confirmation_code
)
except Exception as exc:
logger.error("Could not send confirmation email to '%s' due to: %s", email, exc)
@app.get("/users/me/", response_model=User)
@ -82,15 +80,15 @@ if configGet("registration_requires_confirmation") is True:
responses=user_confirm_responses,
)
async def user_confirm(user: str, code: str):
confirm_record = col_emails.find_one(
confirm_record = await col_emails.find_one(
{"user": user, "code": code, "used": False}
)
if confirm_record is None:
raise UserEmailCodeInvalid()
col_emails.find_one_and_update(
await col_emails.find_one_and_update(
{"_id": confirm_record["_id"]}, {"$set": {"used": True}}
)
col_users.find_one_and_update(
await col_users.find_one_and_update(
{"user": confirm_record["user"]}, {"$set": {"disabled": False}}
)
return UJSONResponse({"detail": configGet("email_confirmed", "messages")})
@ -105,12 +103,13 @@ if configGet("registration_enabled") is True:
async def user_create(
user: str = Form(), email: str = Form(), password: str = Form()
):
if col_users.find_one({"user": user}) is not None:
if (await col_users.find_one({"user": user})) is not None:
raise UserAlreadyExists()
col_users.insert_one(
await col_users.insert_one(
{
"user": user,
"email": email,
"quota": None,
"hash": get_password_hash(password),
"disabled": configGet("registration_requires_confirmation"),
}
@ -134,14 +133,14 @@ user_delete_responses = {401: UserCredentialsInvalid().openapi}
async def user_delete(
password: str = Form(), current_user: User = Depends(get_current_active_user)
):
user = get_user(current_user.user)
user = await get_user(current_user.user)
if not user:
return False
if not verify_password(password, user.hash):
raise UserCredentialsInvalid()
col_users.delete_many({"user": current_user.user})
col_emails.delete_many({"user": current_user.user})
col_photos.delete_many({"user": current_user.user})
col_videos.delete_many({"user": current_user.user})
col_albums.delete_many({"user": current_user.user})
await col_users.delete_many({"user": current_user.user})
await col_emails.delete_many({"user": current_user.user})
await col_photos.delete_many({"user": current_user.user})
await col_videos.delete_many({"user": current_user.user})
await col_albums.delete_many({"user": current_user.user})
return Response(status_code=HTTP_204_NO_CONTENT)

View File

@ -1,31 +1,44 @@
import re
import pickle
from datetime import datetime, timezone
from os import makedirs, remove
from pathlib import Path
from random import randint
from secrets import token_urlsafe
from shutil import move
from typing import Union
import aiofiles
from bson.errors import InvalidId
from bson.objectid import ObjectId
from fastapi import Security, UploadFile
from fastapi.responses import Response, UJSONResponse
from magic import Magic
from datetime import datetime, timezone
from os import makedirs, path, remove
from pymongo import DESCENDING
from starlette.status import HTTP_204_NO_CONTENT
from classes.exceptions import (
AlbumNameNotFoundError,
SearchLimitInvalidError,
SearchPageInvalidError,
SearchTokenInvalidError,
UserMediaQuotaReached,
VideoNotFoundError,
VideoSearchQueryEmptyError,
)
from classes.models import Video, SearchResultsVideo, VideoPublic
from modules.security import User, get_current_active_user
from classes.models import (
RandomSearchResultsVideo,
SearchResultsVideo,
Video,
VideoPublic,
)
from modules.app import app
from modules.database import col_videos, col_albums, col_tokens
from bson.objectid import ObjectId
from bson.errors import InvalidId
from pymongo import DESCENDING
from modules.database import col_albums, col_photos, col_tokens, col_videos
from modules.security import User, get_current_active_user
from fastapi import UploadFile, Security
from fastapi.responses import UJSONResponse, Response
from starlette.status import HTTP_204_NO_CONTENT
video_post_responses = {404: AlbumNameNotFoundError("name").openapi}
video_post_responses = {
403: UserMediaQuotaReached().openapi,
404: AlbumNameNotFoundError("name").openapi,
}
@app.post(
@ -41,34 +54,37 @@ async def video_upload(
caption: Union[str, None] = None,
current_user: User = Security(get_current_active_user, scopes=["videos.write"]),
):
if col_albums.find_one({"user": current_user.user, "name": album}) is None:
if (await col_albums.find_one({"user": current_user.user, "name": album})) is None:
raise AlbumNameNotFoundError(album)
makedirs(
path.join("data", "users", current_user.user, "albums", album), exist_ok=True
)
user_media_count = (
await col_videos.count_documents({"user": current_user.user})
) + (await col_photos.count_documents({"user": current_user.user}))
if user_media_count >= current_user.quota and not current_user.quota == -1: # type: ignore
raise UserMediaQuotaReached()
makedirs(Path(f"data/users/{current_user.user}/albums/{album}"), exist_ok=True)
filename = file.filename
if path.exists(
path.join("data", "users", current_user.user, "albums", album, file.filename)
):
if Path(f"data/users/{current_user.user}/albums/{album}/{file.filename}").exists():
base_name = file.filename.split(".")[:-1]
extension = file.filename.split(".")[-1]
filename = (
".".join(base_name) + f"_{int(datetime.now().timestamp())}." + extension
)
with open(
path.join("data", "users", current_user.user, "albums", album, filename), "wb"
async with aiofiles.open(
Path(f"data/users/{current_user.user}/albums/{album}/{filename}"), "wb"
) as f:
f.write(await file.read())
await f.write(await file.read())
# Hashing and duplicates check should be here
# Coords extraction should be here
uploaded = col_videos.insert_one(
uploaded = await col_videos.insert_one(
{
"user": current_user.user,
"album": album,
@ -91,31 +107,49 @@ async def video_upload(
)
video_get_responses = {404: VideoNotFoundError("id").openapi}
video_get_responses = {
200: {
"content": {
"application/octet-stream": {
"schema": {
"type": "string",
"format": "binary",
"contentMediaType": "video/*",
}
}
}
},
404: VideoNotFoundError("id").openapi,
}
@app.get("/videos/{id}", description="Get a video by id", responses=video_get_responses)
@app.get(
"/videos/{id}",
description="Get a video by id",
responses=video_get_responses,
response_class=Response,
)
async def video_get(
id: str,
current_user: User = Security(get_current_active_user, scopes=["videos.read"]),
):
try:
video = col_videos.find_one({"_id": ObjectId(id)})
video = await col_videos.find_one({"_id": ObjectId(id)})
if video is None:
raise InvalidId(id)
except InvalidId:
raise VideoNotFoundError(id)
except InvalidId as exc:
raise VideoNotFoundError(id) from exc
video_path = path.join(
"data", "users", current_user.user, "albums", video["album"], video["filename"]
video_path = Path(
f"data/users/{current_user.user}/albums/{video['album']}/{video['filename']}"
)
mime = Magic(mime=True).from_file(video_path)
with open(video_path, "rb") as f:
video_file = f.read()
async with aiofiles.open(video_path, "rb") as f:
video_file = await f.read()
return Response(video_file, media_type=mime)
return Response(content=video_file, media_type=mime)
video_move_responses = {404: VideoNotFoundError("id").openapi}
@ -133,20 +167,18 @@ async def video_move(
current_user: User = Security(get_current_active_user, scopes=["videos.write"]),
):
try:
video = col_videos.find_one({"_id": ObjectId(id)})
video = await col_videos.find_one({"_id": ObjectId(id)})
if video is None:
raise InvalidId(id)
except InvalidId:
raise VideoNotFoundError(id)
except InvalidId as exc:
raise VideoNotFoundError(id) from exc
if col_albums.find_one({"user": current_user.user, "name": album}) is None:
if (await col_albums.find_one({"user": current_user.user, "name": album})) is None:
raise AlbumNameNotFoundError(album)
if path.exists(
path.join(
"data", "users", current_user.user, "albums", album, video["filename"]
)
):
if Path(
f"data/users/{current_user.user}/albums/{album}/{video['filename']}"
).exists():
base_name = video["filename"].split(".")[:-1]
extension = video["filename"].split(".")[-1]
filename = (
@ -155,7 +187,7 @@ async def video_move(
else:
filename = video["filename"]
col_videos.find_one_and_update(
await col_videos.find_one_and_update(
{"_id": ObjectId(id)},
{
"$set": {
@ -167,15 +199,10 @@ async def video_move(
)
move(
path.join(
"data",
"users",
current_user.user,
"albums",
video["album"],
video["filename"],
Path(
f"data/users/{current_user.user}/albums/{video['album']}/{video['filename']}"
),
path.join("data", "users", current_user.user, "albums", album, filename),
Path(f"data/users/{current_user.user}/albums/{album}/{filename}"),
)
return UJSONResponse(
@ -202,13 +229,13 @@ async def video_patch(
current_user: User = Security(get_current_active_user, scopes=["videos.write"]),
):
try:
video = col_videos.find_one({"_id": ObjectId(id)})
video = await col_videos.find_one({"_id": ObjectId(id)})
if video is None:
raise InvalidId(id)
except InvalidId:
raise VideoNotFoundError(id)
except InvalidId as exc:
raise VideoNotFoundError(id) from exc
col_videos.find_one_and_update(
await col_videos.find_one_and_update(
{"_id": ObjectId(id)},
{"$set": {"caption": caption, "dates.modified": datetime.now(tz=timezone.utc)}},
)
@ -236,30 +263,87 @@ async def video_delete(
current_user: User = Security(get_current_active_user, scopes=["videos.write"]),
):
try:
video = col_videos.find_one_and_delete({"_id": ObjectId(id)})
video = await col_videos.find_one_and_delete({"_id": ObjectId(id)})
if video is None:
raise InvalidId(id)
except InvalidId:
raise VideoNotFoundError(id)
except InvalidId as exc:
raise VideoNotFoundError(id) from exc
album = col_albums.find_one({"name": video["album"]})
album = await col_albums.find_one({"name": video["album"]})
remove(
path.join(
"data",
"users",
current_user.user,
"albums",
video["album"],
video["filename"],
Path(
f"data/users/{current_user.user}/albums/{video['album']}/{video['filename']}"
)
)
return Response(status_code=HTTP_204_NO_CONTENT)
video_random_responses = {
400: SearchLimitInvalidError().openapi,
404: AlbumNameNotFoundError("name").openapi,
}
@app.get(
"/albums/{album}/videos/random",
description="Get one random video, optionally by caption",
response_class=UJSONResponse,
response_model=RandomSearchResultsVideo,
responses=video_random_responses,
)
async def video_random(
album: str,
caption: Union[str, None] = None,
limit: int = 100,
current_user: User = Security(get_current_active_user, scopes=["videos.list"]),
):
if (await col_albums.find_one({"user": current_user.user, "name": album})) is None:
raise AlbumNameNotFoundError(album)
if limit <= 0:
raise SearchLimitInvalidError()
output = {"results": []}
db_query = (
{
"user": current_user.user,
"album": album,
"caption": re.compile(caption),
}
if caption is not None
else {
"user": current_user.user,
"album": album,
}
)
documents_count = await col_videos.count_documents(db_query)
skip = randint(0, documents_count - 1) if documents_count > 1 else 0
async for video in col_videos.aggregate(
[
{"$match": db_query},
{"$skip": skip},
{"$limit": limit},
]
):
output["results"].append(
{
"id": video["_id"].__str__(),
"filename": video["filename"],
"caption": video["caption"],
}
)
return UJSONResponse(output)
video_find_responses = {
400: SearchPageInvalidError().openapi,
401: SearchTokenInvalidError().openapi,
404: AlbumNameNotFoundError("name").openapi,
422: VideoSearchQueryEmptyError().openapi,
}
@ -267,7 +351,7 @@ video_find_responses = {
@app.get(
"/albums/{album}/videos",
description="Find a video by filename",
description="Find a video by filename, caption or token",
response_class=UJSONResponse,
response_model=SearchResultsVideo,
responses=video_find_responses,
@ -276,11 +360,27 @@ async def video_find(
album: str,
q: Union[str, None] = None,
caption: Union[str, None] = None,
token: Union[str, None] = None,
page: int = 1,
page_size: int = 100,
current_user: User = Security(get_current_active_user, scopes=["videos.list"]),
):
if col_albums.find_one({"user": current_user.user, "name": album}) is None:
if token is not None:
found_record = await col_tokens.find_one({"token": token})
if found_record is None:
raise SearchTokenInvalidError()
return await video_find(
album=album,
q=found_record["query"],
caption=found_record["caption"],
page=found_record["page"],
page_size=found_record["page_size"],
current_user=current_user,
)
if (await col_albums.find_one({"user": current_user.user, "name": album})) is None:
raise AlbumNameNotFoundError(album)
if page <= 0 or page_size <= 0:
@ -292,7 +392,7 @@ async def video_find(
if q is None and caption is None:
raise VideoSearchQueryEmptyError()
if q is None and caption is not None:
if q is None:
db_query = {
"user": current_user.user,
"album": album,
@ -303,30 +403,34 @@ async def video_find(
"album": album,
"caption": re.compile(caption),
}
elif q is not None and caption is None:
db_query = list(
col_videos.find(
{"user": current_user.user, "album": album, "filename": re.compile(q)},
limit=page_size,
skip=skip,
).sort("dates.uploaded", DESCENDING)
)
elif caption is None:
db_query = {
"user": current_user.user,
"album": album,
"filename": re.compile(q),
}
db_query_count = {
"user": current_user.user,
"album": album,
"filename": re.compile(q),
}
else:
db_query = list(col_videos.find({"user": current_user.user, "album": album, "filename": re.compile(q), "caption": re.compile(caption)}, limit=page_size, skip=skip).sort("dates.uploaded", DESCENDING)) # type: ignore
db_query_count = {"user": current_user.user, "album": album, "filename": re.compile(q), "caption": re.compile(caption)} # type: ignore
db_query = {
"user": current_user.user,
"album": album,
"filename": re.compile(q),
"caption": re.compile(caption),
}
db_query_count = {
"user": current_user.user,
"album": album,
"filename": re.compile(q),
"caption": re.compile(caption),
}
videos = list(
col_videos.find(db_query, limit=page_size, skip=skip).sort(
"dates.uploaded", DESCENDING
)
)
for video in videos:
async for video in col_videos.find(db_query, limit=page_size, skip=skip).sort(
"dates.uploaded", direction=DESCENDING
):
output["results"].append(
{
"id": video["_id"].__str__(),
@@ -335,45 +439,19 @@ async def video_find(
}
)
if col_videos.count_documents(db_query_count) > page * page_size:
if (await col_videos.count_documents(db_query_count)) > page * page_size:
token = str(token_urlsafe(32))
col_tokens.insert_one(
await col_tokens.insert_one(
{
"token": token,
"query": q,
"album": album,
"caption": caption,
"page": page + 1,
"page_size": page_size,
"user": pickle.dumps(current_user),
}
)
output["next_page"] = f"/albums/{album}/videos/token?token={token}" # type: ignore
output["next_page"] = f"/albums/{album}/videos/?token={token}" # type: ignore
else:
output["next_page"] = None # type: ignore
return UJSONResponse(output)
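The next-page decision above compares the total match count against `page * page_size`. It can be sketched as a standalone helper (`has_next_page` is a hypothetical name, not part of the diff):

```python
def has_next_page(total_count: int, page: int, page_size: int) -> bool:
    # Documents remain beyond the current window exactly when the
    # total exceeds page * page_size (pages are 1-indexed).
    return total_count > page * page_size
```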
video_find_token_responses = {401: SearchTokenInvalidError().openapi}
@app.get(
"/albums/{album}/videos/token",
description="Find a video by token",
response_class=UJSONResponse,
response_model=SearchResultsVideo,
responses=video_find_token_responses,
)
async def video_find_token(token: str):
found_record = col_tokens.find_one({"token": token})
if found_record is None:
raise SearchTokenInvalidError()
return await video_find(
q=found_record["query"],
album=found_record["album"],
page=found_record["page"],
page_size=found_record["page_size"],
current_user=pickle.loads(found_record["user"]),
)

@@ -0,0 +1,9 @@
from mongodb_migrations.base import BaseMigration
class Migration(BaseMigration):
def upgrade(self):
self.db.users.update_many({}, {"$set": {"quota": None}})
def downgrade(self):
self.db.users.update_many({}, {"$unset": {"quota": ""}})  # mirror upgrade(): target users; $unset takes a document
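The migration's effect on a user document can be illustrated with a minimal in-memory emulation of MongoDB's `$set`/`$unset` operators (`apply_set`/`apply_unset` are hypothetical helpers, not part of the diff):

```python
def apply_set(doc: dict, fields: dict) -> dict:
    # $set adds or overwrites the given fields
    updated = dict(doc)
    updated.update(fields)
    return updated

def apply_unset(doc: dict, fields: dict) -> dict:
    # $unset drops the given fields; the values in `fields` are ignored
    return {k: v for k, v in doc.items() if k not in fields}
```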

@@ -1,15 +1,14 @@
from fastapi import FastAPI
from fastapi.openapi.docs import get_swagger_ui_html, get_redoc_html
from fastapi.openapi.docs import get_redoc_html, get_swagger_ui_html
app = FastAPI(title="END PLAY Photos", docs_url=None, redoc_url=None, version="0.1")
app = FastAPI(title="END PLAY Photos", docs_url=None, redoc_url=None, version="0.6")
@app.get("/docs", include_in_schema=False)
async def custom_swagger_ui_html():
return get_swagger_ui_html(
openapi_url=app.openapi_url, # type: ignore
title=app.title + " - Documentation",
openapi_url=app.openapi_url,
title=f"{app.title} - Documentation",
swagger_favicon_url="/favicon.ico",
)
@@ -17,7 +16,7 @@ async def custom_swagger_ui_html():
@app.get("/redoc", include_in_schema=False)
async def custom_redoc_html():
return get_redoc_html(
openapi_url=app.openapi_url, # type: ignore
title=app.title + " - Documentation",
openapi_url=app.openapi_url,
title=f"{app.title} - Documentation",
redoc_favicon_url="/favicon.ico",
)

@@ -1,5 +1,7 @@
from async_pymongo import AsyncClient
from pymongo import GEOSPHERE, MongoClient
from modules.utils import configGet
from pymongo import MongoClient, GEOSPHERE
db_config = configGet("database")
@@ -16,16 +18,11 @@ else:
db_config["host"], db_config["port"], db_config["name"]
)
db_client = MongoClient(con_string)
db_client = AsyncClient(con_string)
db_client_sync = MongoClient(con_string)
db = db_client.get_database(name=db_config["name"])
collections = db.list_collection_names()
for collection in ["users", "albums", "photos", "videos", "tokens", "emails"]:
if not collection in collections:
db.create_collection(collection)
col_users = db.get_collection("users")
col_albums = db.get_collection("albums")
col_photos = db.get_collection("photos")
@@ -33,4 +30,4 @@ col_videos = db.get_collection("videos")
col_tokens = db.get_collection("tokens")
col_emails = db.get_collection("emails")
col_photos.create_index([("location", GEOSPHERE)])
db_client_sync[db_config["name"]]["photos"].create_index([("location", GEOSPHERE)])

@@ -1,3 +1,7 @@
import contextlib
from pathlib import Path
from typing import Mapping, Union
from exif import Image
@@ -12,12 +16,14 @@ def decimal_coords(coords: float, ref: str) -> float:
* float: Decimal degrees
"""
decimal_degrees = coords[0] + coords[1] / 60 + coords[2] / 3600
if ref == "S" or ref == "W":
if ref in {"S", "W"}:
decimal_degrees = -decimal_degrees
return round(decimal_degrees, 5)
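The DMS-to-decimal conversion performed by `decimal_coords` can be checked in isolation; `dms_to_decimal` below is a hypothetical standalone sketch mirroring the formula above:

```python
def dms_to_decimal(coords, ref: str) -> float:
    # degrees + minutes/60 + seconds/3600, negated for south/west references
    decimal = coords[0] + coords[1] / 60 + coords[2] / 3600
    return round(-decimal if ref in {"S", "W"} else decimal, 5)

# 50 deg 27 min 0 sec North -> 50.45
```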
def extract_location(filepath: str) -> dict:
def extract_location(filepath: Union[str, Path]) -> Mapping[str, float]:
"""Get location data from image
### Args:
@@ -35,11 +41,9 @@ def extract_location(filepath: str) -> dict:
if img.has_exif is False:
return output
try:
with contextlib.suppress(AttributeError):
output["lng"] = decimal_coords(img.gps_longitude, img.gps_longitude_ref)
output["lat"] = decimal_coords(img.gps_latitude, img.gps_latitude_ref)
output["alt"] = img.gps_altitude
except AttributeError:
pass
return output

@@ -1,5 +1,7 @@
from importlib.util import module_from_spec, spec_from_file_location
from os import getcwd, path, walk
from pathlib import Path
from typing import Union
# =================================================================================
@@ -10,17 +12,21 @@ def get_py_files(src):
cwd = getcwd() # Current Working directory
py_files = []
for root, dirs, files in walk(src):
for file in files:
if file.endswith(".py"):
py_files.append(path.join(cwd, root, file))
py_files.extend(
Path(f"{cwd}/{root}/{file}") for file in files if file.endswith(".py")
)
return py_files
def dynamic_import(module_name, py_path):
def dynamic_import(module_name: str, py_path: str):
try:
module_spec = spec_from_file_location(module_name, py_path)
module = module_from_spec(module_spec) # type: ignore
module_spec.loader.exec_module(module) # type: ignore
if module_spec is None:
raise RuntimeError(
f"Module spec from module name {module_name} and path {py_path} is None"
)
module = module_from_spec(module_spec)
module_spec.loader.exec_module(module)
return module
except SyntaxError:
print(
@@ -28,15 +34,15 @@ def dynamic_import(module_name, py_path):
flush=True,
)
return
except Exception as exp:
print(f"Could not load extension {module_name} due to {exp}", flush=True)
except Exception as exc:
print(f"Could not load extension {module_name} due to {exc}", flush=True)
return
def dynamic_import_from_src(src, star_import=False):
def dynamic_import_from_src(src: Union[str, Path], star_import=False):
my_py_files = get_py_files(src)
for py_file in my_py_files:
module_name = path.split(py_file)[-1][:-3]
module_name = Path(py_file).stem
print(f"Importing {module_name} extension...", flush=True)
imported_module = dynamic_import(module_name, py_file)
if imported_module is not None:

@@ -1,11 +1,15 @@
from modules.database import col_photos
from pathlib import Path
from typing import Any, List, Mapping, Union
import cv2
import numpy as np
from numpy.typing import NDArray
from scipy import spatial
import cv2
from modules.database import col_photos
def hash_array_to_hash_hex(hash_array):
def hash_array_to_hash_hex(hash_array) -> str:
# convert hash array of 0 or 1 to hash string in hex
hash_array = np.array(hash_array, dtype=np.uint8)
hash_str = "".join(str(i) for i in 1 * hash_array.flatten())
@@ -16,18 +20,18 @@ def hash_hex_to_hash_array(hash_hex) -> NDArray:
# convert hash string in hex to hash values of 0 or 1
hash_str = int(hash_hex, 16)
array_str = bin(hash_str)[2:]
return np.array([i for i in array_str], dtype=np.float32)
return np.array(list(array_str), dtype=np.float32)
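The hex/bit round-trip these two helpers perform can be sketched in pure Python (hypothetical names, no numpy); note that `bin()` drops leading zero bits, which is why mismatched hash lengths can surface downstream as a `ValueError`:

```python
def bits_to_hex(bits: str) -> str:
    # pack a string of 0/1 characters into hex, four bits per digit
    return "".join(f"{int(bits[i:i + 4], 2):x}" for i in range(0, len(bits), 4))

def hex_to_bits(hash_hex: str) -> str:
    # bin() strips leading zeros, so "0a" round-trips to "1010", not "00001010"
    return bin(int(hash_hex, 16))[2:]
```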
def get_duplicates_cache(album: str) -> dict:
output = {}
for photo in col_photos.find({"album": album}):
output[photo["filename"]] = [photo["_id"].__str__(), photo["hash"]]
return output
async def get_duplicates_cache(album: str) -> Mapping[str, Any]:
return {
photo["filename"]: [photo["_id"].__str__(), photo["hash"]]
async for photo in col_photos.find({"album": album})
}
async def get_phash(filepath: str) -> str:
img = cv2.imread(filepath)
async def get_phash(filepath: Union[str, Path]) -> str:
img = cv2.imread(str(filepath))
# resize image and convert to gray scale
img = cv2.resize(img, (64, 64))
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
@@ -48,14 +52,14 @@ async def get_phash(filepath: str) -> str:
return hash_array_to_hash_hex(dct_block.flatten())
async def get_duplicates(hash: str, album: str) -> list:
async def get_duplicates(hash_string: str, album: str) -> List[Mapping[str, Any]]:
duplicates = []
cache = get_duplicates_cache(album)
for image_name in cache.keys():
cache = await get_duplicates_cache(album)
for image_name, image_object in cache.items():
try:
distance = spatial.distance.hamming(
hash_hex_to_hash_array(cache[image_name][1]),
hash_hex_to_hash_array(hash),
hash_hex_to_hash_array(hash_string),
)
except ValueError:
continue
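`spatial.distance.hamming` returns the fraction of positions at which two equal-length sequences differ; an equivalent pure-Python sketch (hypothetical `hamming_fraction`, not part of the diff):

```python
def hamming_fraction(a, b) -> float:
    # fraction of positions where the two sequences differ
    if len(a) != len(b):
        raise ValueError("sequences must have equal length")  # the case skipped above
    return sum(x != y for x, y in zip(a, b)) / len(a)
```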

@@ -1,7 +1,11 @@
import logging
from smtplib import SMTP, SMTP_SSL
from traceback import print_exc
from ssl import create_default_context
from modules.utils import configGet, logWrite
from traceback import print_exc
from modules.utils import configGet
logger = logging.getLogger(__name__)
try:
if configGet("use_ssl", "mailer", "smtp") is True:
@@ -9,7 +13,7 @@ try:
configGet("host", "mailer", "smtp"),
configGet("port", "mailer", "smtp"),
)
logWrite(f"Initialized SMTP SSL connection")
logger.info("Initialized SMTP SSL connection")
elif configGet("use_tls", "mailer", "smtp") is True:
mail_sender = SMTP(
configGet("host", "mailer", "smtp"),
@@ -17,21 +21,21 @@ try:
)
mail_sender.starttls(context=create_default_context())
mail_sender.ehlo()
logWrite(f"Initialized SMTP TLS connection")
logger.info("Initialized SMTP TLS connection")
else:
mail_sender = SMTP(
configGet("host", "mailer", "smtp"), configGet("port", "mailer", "smtp")
)
mail_sender.ehlo()
logWrite(f"Initialized SMTP connection")
except Exception as exp:
logWrite(f"Could not initialize SMTP connection to: {exp}")
logger.info("Initialized SMTP connection")
except Exception as exc:
logger.error("Could not initialize SMTP connection to: %s", exc)
print_exc()
try:
mail_sender.login(
configGet("login", "mailer", "smtp"), configGet("password", "mailer", "smtp")
)
logWrite(f"Successfully initialized mailer")
except Exception as exp:
logWrite(f"Could not login into provided SMTP account due to: {exp}")
logger.info("Successfully initialized mailer")
except Exception as exc:
logger.error("Could not login into provided SMTP account due to: %s", exc)

modules/migrator.py Normal file
@@ -0,0 +1,23 @@
from typing import Any, Mapping
from mongodb_migrations.cli import MigrationManager
from mongodb_migrations.config import Configuration
from modules.utils import configGet
def migrate_database() -> None:
"""Apply migrations from folder `migrations/` to the database"""
db_config: Mapping[str, Any] = configGet("database")
manager_config = Configuration(
{
"mongo_host": db_config["host"],
"mongo_port": db_config["port"],
"mongo_database": db_config["name"],
"mongo_username": db_config["user"],
"mongo_password": db_config["password"],
}
)
manager = MigrationManager(manager_config)
manager.run()

@@ -1,19 +1,34 @@
from datetime import datetime, timedelta, timezone
from os import getenv
from typing import List, Union
from modules.database import col_users
from fastapi import Depends, HTTPException, Security, status
from fastapi.security import (
OAuth2PasswordBearer,
SecurityScopes,
)
from fastapi.security import OAuth2PasswordBearer, SecurityScopes
from jose import JWTError, jwt
from passlib.context import CryptContext
from pydantic import BaseModel, ValidationError
from modules.database import col_users
from modules.utils import configGet
try:
configGet("secret")
except KeyError as exc:
raise KeyError(
"PhotosAPI secret is not set. Secret key handling has changed in PhotosAPI 0.6.0, so you need to add the config key 'secret' to your config file."
) from exc
if configGet("secret") == "" and getenv("PHOTOSAPI_SECRET") is None:
raise KeyError(
"PhotosAPI secret is not set. Set the config key 'secret' or provide the environment variable 'PHOTOSAPI_SECRET' containing a secret string."
)
SECRET_KEY = (
getenv("PHOTOSAPI_SECRET")
if getenv("PHOTOSAPI_SECRET") is not None
else configGet("secret")
)
with open("secret_key", "r", encoding="utf-8") as f:
SECRET_KEY = f.read()
ALGORITHM = "HS256"
ACCESS_TOKEN_EXPIRE_DAYS = 180
@@ -31,6 +46,7 @@ class TokenData(BaseModel):
class User(BaseModel):
user: str
email: Union[str, None] = None
quota: Union[int, None] = None
disabled: Union[bool, None] = None
@@ -57,49 +73,58 @@ oauth2_scheme = OAuth2PasswordBearer(
)
def verify_password(plain_password, hashed_password):
def verify_password(plain_password, hashed_password) -> bool:
return pwd_context.verify(plain_password, hashed_password)
def get_password_hash(password):
def get_password_hash(password) -> str:
return pwd_context.hash(password)
def get_user(user: str):
found_user = col_users.find_one({"user": user})
async def get_user(user: str) -> UserInDB:
found_user = await col_users.find_one({"user": user})
if found_user is None:
raise RuntimeError(f"User {user} does not exist")
return UserInDB(
user=found_user["user"],
email=found_user["email"],
quota=found_user["quota"]
if found_user["quota"] is not None
else configGet("default_user_quota"),
disabled=found_user["disabled"],
hash=found_user["hash"],
)
def authenticate_user(user_name: str, password: str):
user = get_user(user_name)
if not user:
async def authenticate_user(user_name: str, password: str) -> Union[UserInDB, bool]:
if user := await get_user(user_name):
return user if verify_password(password, user.hash) else False
else:
return False
if not verify_password(password, user.hash):
return False
return user
def create_access_token(data: dict, expires_delta: Union[timedelta, None] = None):
def create_access_token(
data: dict, expires_delta: Union[timedelta, None] = None
) -> str:
to_encode = data.copy()
if expires_delta:
expire = datetime.now(tz=timezone.utc) + expires_delta
else:
expire = datetime.now(tz=timezone.utc) + timedelta(
days=ACCESS_TOKEN_EXPIRE_DAYS
)
to_encode.update({"exp": expire})
encoded_jwt = jwt.encode(to_encode, SECRET_KEY, algorithm=ALGORITHM)
return encoded_jwt
to_encode["exp"] = expire
return jwt.encode(to_encode, SECRET_KEY, algorithm=ALGORITHM)
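The expiry selection in `create_access_token` can be isolated as follows (a minimal sketch with a hypothetical `token_expiry` helper; the real function also signs the payload with `jwt.encode`):

```python
from datetime import datetime, timedelta, timezone

def token_expiry(expires_delta=None, default_days: int = 180) -> datetime:
    # fall back to the 180-day default when no explicit delta is given
    delta = expires_delta if expires_delta else timedelta(days=default_days)
    return datetime.now(tz=timezone.utc) + delta
```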
async def get_current_user(
security_scopes: SecurityScopes, token: str = Depends(oauth2_scheme)
):
) -> UserInDB:
if security_scopes.scopes:
authenticate_value = f'Bearer scope="{security_scopes.scope_str}"'
else:
@@ -114,16 +139,18 @@ async def get_current_user(
try:
payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
user: str = payload.get("sub")
if user is None:
raise credentials_exception
token_scopes = payload.get("scopes", [])
token_data = TokenData(scopes=token_scopes, user=user)
except (JWTError, ValidationError):
raise credentials_exception
except (JWTError, ValidationError) as exc:
raise credentials_exception from exc
user = get_user(user=token_data.user)
user_record = await get_user(user=token_data.user)
if user is None:
if user_record is None:
raise credentials_exception
for scope in security_scopes.scopes:
@@ -133,7 +160,8 @@ async def get_current_user(
detail="Not enough permissions",
headers={"WWW-Authenticate": authenticate_value},
)
return user
return user_record
async def get_current_active_user(
@@ -141,4 +169,5 @@ async def get_current_active_user(
):
if current_user.disabled:
raise HTTPException(status_code=400, detail="Inactive user")
return current_user

@@ -1,20 +1,18 @@
import logging
from pathlib import Path
from traceback import format_exc
from typing import Any, Union
from ujson import loads, dumps, JSONDecodeError
from traceback import print_exc
from ujson import JSONDecodeError, dumps, loads
logger = logging.getLogger(__name__)
# Print to stdout and then to log
def logWrite(message: str, debug: bool = False) -> None:
# save to log file and rotation is to be done
# logAppend(f'{message}', debug=debug)
print(f"{message}", flush=True)
def jsonLoad(filepath: str) -> Any:
def jsonLoad(filepath: Union[str, Path]) -> Any:
"""Load json file
### Args:
* filepath (`str`): Path to input file
* filepath (`Union[str, Path]`): Path to input file
### Returns:
* `Any`: Some json deserializable
@@ -23,32 +21,36 @@ def jsonLoad(filepath: str) -> Any:
try:
output = loads(file.read())
except JSONDecodeError:
logWrite(
f"Could not load json file {filepath}: file seems to be incorrect!\n{print_exc()}"
logger.error(
"Could not load json file %s: file seems to be incorrect!\n%s",
filepath,
format_exc(),
)
raise
except FileNotFoundError:
logWrite(
f"Could not load json file {filepath}: file does not seem to exist!\n{print_exc()}"
logger.error(
"Could not load json file %s: file does not seem to exist!\n%s",
filepath,
format_exc(),
)
raise
file.close()
return output
def jsonSave(contents: Union[list, dict], filepath: str) -> None:
def jsonSave(contents: Union[list, dict], filepath: Union[str, Path]) -> None:
"""Save contents into json file
### Args:
* contents (`Union[list, dict]`): Some json serializable
* filepath (`str`): Path to output file
* filepath (`Union[str, Path]`): Path to output file
"""
try:
with open(filepath, "w", encoding="utf8") as file:
file.write(dumps(contents, ensure_ascii=False, indent=4))
file.close()
except Exception as exp:
logWrite(f"Could not save json file {filepath}: {exp}\n{print_exc()}")
except Exception as exc:
logger.error("Could not save json file %s: %s\n%s", filepath, exc, format_exc())
return
@@ -62,7 +64,7 @@ def configGet(key: str, *args: str) -> Any:
### Returns:
* `Any`: Value of provided key
"""
this_dict = jsonLoad("config.json")
this_dict = jsonLoad(Path("config.json"))
this_key = this_dict
for dict_key in args:
this_key = this_key[dict_key]
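`configGet` walks the nested sections given in `*args` and then returns the requested key, so `configGet("host", "mailer", "smtp")` reads `config["mailer"]["smtp"]["host"]`. The lookup can be sketched as (hypothetical `nested_get`, not part of the diff):

```python
def nested_get(config: dict, key: str, *path: str):
    # descend through nested sections, then read the final key
    node = config
    for section in path:
        node = node[section]
    return node[key]
```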

@@ -1,11 +1,22 @@
from os import makedirs, path
from modules.app import app
from modules.utils import *
from modules.scheduler import scheduler
from modules.extensions_loader import dynamic_import_from_src
import logging
from argparse import ArgumentParser
from os import makedirs
from pathlib import Path
from fastapi.responses import FileResponse
makedirs(path.join("data", "users"), exist_ok=True)
from modules.app import app
from modules.extensions_loader import dynamic_import_from_src
from modules.migrator import migrate_database
from modules.scheduler import scheduler
makedirs(Path("data/users"), exist_ok=True)
logging.basicConfig(
level=logging.INFO,
format="%(name)s.%(funcName)s | %(levelname)s | %(message)s",
datefmt="[%X]",
)
@app.get("/favicon.ico", response_class=FileResponse, include_in_schema=False)
@@ -18,3 +29,15 @@ dynamic_import_from_src("extensions", star_import=True)
# =================================================================================
scheduler.start()
parser = ArgumentParser(
prog="PhotosAPI",
description="Small and simple API server for saving photos and videos.",
)
parser.add_argument("--migrate", action="store_true")
args, unknown = parser.parse_known_args()
if args.migrate:
migrate_database()

@@ -1,10 +1,14 @@
fastapi[all]~=0.94.0
pymongo==4.3.3
ujson~=5.7.0
scipy~=1.10.1
python-magic~=0.4.27
opencv-python~=4.7.0.72
python-jose[cryptography]~=3.3.0
passlib~=1.7.4
aiofiles==23.2.1
apscheduler~=3.10.1
exif==1.5.0
exif==1.6.0
fastapi[all]==0.104.1
mongodb-migrations==1.3.0
opencv-python~=4.8.1.78
passlib~=1.7.4
pymongo>=4.3.3
python-jose[cryptography]~=3.3.0
python-magic~=0.4.27
scipy~=1.11.0
ujson~=5.8.0
--extra-index-url https://git.end-play.xyz/api/packages/profitroll/pypi/simple
async_pymongo==0.1.4