Compare commits
No commits in common. "main" and "konfluks-renaming" have entirely different histories.
main ... konfluks-renaming

README.md (37 lines changed)
@@ -1,4 +1,4 @@
 # Konfluks
@@ -6,7 +6,7 @@ A drainage basin is a geographical feature that collects all precipitation in an
 Specifically, Konfluks turns Peertube videos, iCal calendar events, other websites (through their RSS and OPDS feeds) and Mastodon posts under a hashtag into Hugo page bundles. This allows one to publish from diverse sources to a single stream.

-Konfluks was first made by [Roel Roscam Abbing](https://test.roelof.info/) as part of [lumbung.space](https://lumbung.space), together with [ruangrupa](https://ruangrupa.id) and [Autonomic](https://autonomic.zone).
+Konfluks was first made by Roel Roscam Abbing as part of [lumbung.space](https://lumbung.space), together with ruangrupa and Autonomic.

 ## Philosophy
@@ -22,26 +22,24 @@ Konfluks is extendable, a work in progress and a messy undertaking.

 ## High-level overview

-Konfluks consists of different Python scripts which each poll a particular service, say, a [Peertube](https://joinpeertube.org) server, to download information and convert it into [Hugo Page Bundles](https://gohugo.io/content-management/page-bundles/).
+Konfluks consists of different Python scripts which each poll a particular service, say, a Peertube server, to download information and convert it into [Hugo Page Bundles](https://gohugo.io/content-management/page-bundles/).

 Each script that is part of Konfluks will essentially do the following:

 * Parse a source and request posts/updates/videos/a feed
 * Take care of publish queues
 * Create a Hugo post for each item returned, by:
   * Making a folder per post in the `output` directory
   * Formatting post metadata as [Hugo Post Frontmatter](https://gohugo.io/content-management/front-matter/) in a file called `index.md`
   * Grabbing local copies of media and saving them in the post folder
   * Adding the post content to `index.md`
-  * According to jinja2 templates (see `konfluks/templates/`)
+  * According to jinja2 templates (see `Konfluks/templates/`)

 The page bundles created are, where possible, given human-friendly names.

 Here is a typical output structure (a sketch of the bundle-writing pattern follows it):
 ```
-user@server: ~/konfluks/output: tree tv/
+user@server: ~/Konfluks/output: tree tv/
 tv/
 ├── forum-27an-mother-earth-353f93f3-5fee-49d6-b71d-8aef753f7041
 │   ├── 86ccae63-3df9-443c-91f3-edce146055db.jpg
@@ -54,6 +52,8 @@ Here is a typical output structure:
 │   └── index.md
 ```
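The steps listed above all end in this kind of bundle: a folder per item holding its media plus an `index.md`. A minimal sketch of that pattern, with illustrative names (this is not Konfluks' actual helper API):

```
# Minimal sketch of the page-bundle pattern described above.
# Function and field names are illustrative, not Konfluks' own API.
import os

def write_page_bundle(output_dir, post_name, frontmatter, content):
    """Make a folder per post and write frontmatter + content to index.md."""
    post_dir = os.path.join(output_dir, post_name)
    os.makedirs(post_dir, exist_ok=True)
    # Hugo frontmatter is YAML between "---" fences at the top of index.md
    lines = ["---"]
    for key, value in frontmatter.items():
        lines.append('{}: "{}"'.format(key, value))
    lines.append("---")
    lines.append(content)
    with open(os.path.join(post_dir, "index.md"), "w") as f:
        f.write("\n".join(lines))

write_page_bundle("output/tv", "forum-27an-mother-earth",
                  {"title": "Forum 27an: Mother Earth", "draft": "false"},
                  "Post body goes here.")
```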
 ## Hacking

 Install [poetry](https://python-poetry.org/docs/#osx--linux--bashonwindows-install-instructions):

@@ -62,20 +62,31 @@ Install [poetry](https://python-poetry.org/docs/#osx--linux--bashonwindows-install-instructions):
 ```
 curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python -
 ```
-We use Poetry because it locks the dependencies all the way down and makes it easier to manage installation & maintenance in the long-term. Then install the dependencies & have them managed by Poetry:
+We use Poetry because it locks the dependencies all the way down and makes it
+easier to manage installation & maintenance in the long-term. Then install the
+dependencies & have them managed by Poetry:

 ```
 poetry install
 ```

-Each script requires some environment variables to run; you can see the latest deployment configuration over [here](https://git.autonomic.zone/ruangrupa/lumbung.space/src/branch/main/compose.yml), look for the values under the `environment: ...` stanza.
+Each script requires some environment variables to run; you can see the latest
+deployment configuration over
+[here](https://git.autonomic.zone/ruangrupa/lumbung.space/src/branch/main/compose.yml),
+look for the values under the `environment: ...` stanza.

-All scripts have an entrypoint described in the [`pyproject.toml`](./pyproject.toml) which you can run via `poetry run ...`. For example, if you want to run the [`konfluks/video.py`](./konfluks/video.py) script, you'd do:
+All scripts have an entrypoint described in the
+[`pyproject.toml`](https://git.autonomic.zone/ruangrupa/lumbunglib/src/commit/40bf9416b8792c08683ad8ac878093c7ef1b2f5d/pyproject.toml#L27-L31)
+which you can run via `poetry run ...`. For example, if you want to run the
+[`lumbunglib/video.py`](./lumbunglib/video.py) script, you'd do:

 ```
 mkdir -p testdir
 export OUTPUT_DIR=./testdir
-poetry run konfluks-vid
+poetry run lumbunglib-vid
 ```

-Run `poetry run poetry2setup > setup.py` if updating the poetry dependencies. This allows us to run `pip install .` in the deployment and Pip will understand that it is just a regular Python package. If adding a new cli command, extend `pyproject.toml` with a new `[tool.poetry.scripts]` entry.
+Run `poetry run poetry2setup > setup.py` if updating the poetry dependencies.
+This allows us to run `pip install .` in the deployment and Pip will understand
+that it is just a regular Python package. If adding a new cli command, extend
+`pyproject.toml` with a new `[tool.poetry.scripts]` entry.
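For a hypothetical new command, such an entry would mirror the existing ones; the name `konfluks-foo` and the module `konfluks.foo` below are placeholders, not real parts of the codebase:

```
[tool.poetry.scripts]
konfluks-foo = "konfluks.foo:main"
```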
(image file changed: 29 KiB before, 29 KiB after)
@@ -138,9 +138,9 @@ def create_event_post(post_dir, event):
     for img in event_metadata["images"]:

         # parse img url to safe local image name
-        img_name = os.path.basename(img)
-        fn, ext = os.path.splitext(img_name)
-        img_name = slugify(fn) + '.' + ext
+        img_name = img.split("/")[-1]
+        fn, ext = img_name.split(".")
+        img_name = slugify(fn) + "." + ext

         local_image = os.path.join(post_dir, img_name)
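A side note on the hunk above: `os.path.splitext` keeps the leading dot in `ext`, so joining with an extra `'.'` (as on the `-` side) yields a doubled dot, while `img_name.split(".")` (the `+` side) breaks on filenames containing more than one dot. A sketch that avoids both pitfalls, assuming the same `python-slugify` package used elsewhere in these scripts:

```
import os
from slugify import slugify  # python-slugify, as imported elsewhere here

def safe_image_name(img_url):
    """Turn a remote image URL into a safe local filename."""
    img_name = os.path.basename(img_url)   # last path segment
    fn, ext = os.path.splitext(img_name)   # ext keeps its leading dot
    return slugify(fn) + ext

# safe_image_name("https://example.org/media/Some Image.JPG") -> "some-image.JPG"
```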
@@ -155,11 +155,8 @@ def parse_enclosures(post_dir, entry):
         if "type" in e:
             print("found enclosed media", e.type)
             if "image/" in e.type:
-                if not os.path.exists(post_dir):  # this might be redundant with create_post
-                    os.makedirs(post_dir)
                 featured_image = grab_media(post_dir, e.href)
-                media_item = urlparse(e.href).path.split('/')[-1]
-                entry["featured_image"] = media_item
+                entry["featured_image"] = featured_image
             else:
                 print("FIXME:ignoring enclosed", e.type)
     return entry
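For context on `parse_enclosures`: feedparser exposes RSS enclosures as a list of link dicts on each entry. A minimal standalone sketch (the feed URL is a placeholder):

```
import feedparser

data = feedparser.parse("https://example.org/feed.xml")  # placeholder URL
for entry in data.entries:
    for e in entry.get("enclosures", []):
        # each enclosure carries href/type attributes
        if "type" in e and "image/" in e.type:
            print("found enclosed media", e.href)
```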
@@ -376,16 +373,16 @@ def main():

         data = grab_feed(feed_url)

-        if data: #whenever we get a 200
-            if data.feed: #only if it is an actual feed
+        if data:
             opds_feed = False
-            if 'links' in data.feed:
             for i in data.feed['links']:
                 if i['rel'] == 'self':
                     if 'opds' in i['type']:
                         opds_feed = True
                         print("OPDS type feed!")

             for entry in data.entries:
                 # if 'tags' in entry:
                 #     for tag in entry.tags:
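The OPDS check above keys off the feed's `self` link: an OPDS catalog advertises an `atom+xml` media type with an `opds-catalog` profile, so substring-matching `'opds'` on the link `type` suffices. For illustration (the link dict below is a made-up example):

```
feed_links = [{
    "rel": "self",
    "type": "application/atom+xml;profile=opds-catalog;kind=acquisition",
    "href": "https://example.org/opds/catalog.xml",  # placeholder
}]

opds_feed = any(
    link["rel"] == "self" and "opds" in link.get("type", "")
    for link in feed_links
)
print(opds_feed)  # True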
@@ -428,14 +425,10 @@ def main():
                         post_name
                     )  # create list of posts which have not been returned by the feed

             for post in existing_posts:
                 # remove blog posts no longer returned by the RSS feed
-                post_dir = os.path.join(output_dir, feed_name, post)
-                shutil.rmtree(post_dir)
-                print("deleted", post_dir)
-        else:
-            print(feed_url, "is not or no longer a feed!")
+                print("deleted", post)
+                shutil.rmtree(os.path.join(feed_dir, slugify(post)))

     end = time.time()
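Both variants in the hunk above implement the same idea: whatever sits on disk but is no longer mentioned by the feed gets removed. Reduced to its core (paths and names are placeholders):

```
import os
import shutil

feed_dir = "output/example.org"                 # placeholder path
posts_in_feed = {"a-post", "another-post"}      # names built from feed entries
existing_posts = set(os.listdir(feed_dir))      # names already on disk

for stale in existing_posts - posts_in_feed:
    shutil.rmtree(os.path.join(feed_dir, stale))
    print("deleted", stale)
```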
@@ -23,7 +23,6 @@ hashtags = [
     "ruruhaus",
     "offbeatentrack_kassel",
     "lumbungofpublishers",
-    "lumbungkiosproducts",
 ]
@@ -60,21 +59,6 @@ def download_media(post_directory, media_attachments):
             with open(os.path.join(post_directory, image), "wb") as img_file:
                 shutil.copyfileobj(response.raw, img_file)
             print("Downloaded cover image", image)
-        elif item["type"] == "video":
-            video = localize_media_url(item["url"])
-            if not os.path.exists(os.path.join(post_directory, video)):
-                # download video file
-                response = requests.get(item["url"], stream=True)
-                with open(os.path.join(post_directory, video), "wb") as video_file:
-                    shutil.copyfileobj(response.raw, video_file)
-                print("Downloaded video in post", video)
-            if not os.path.exists(os.path.join(post_directory, "thumbnail.png")):
-                # download video preview
-                response = requests.get(item["preview_url"], stream=True)
-                with open(os.path.join(post_directory, "thumbnail.png"), "wb") as thumbnail:
-                    shutil.copyfileobj(response.raw, thumbnail)
-                print("Downloaded thumbnail for", video)


 def create_post(post_directory, post_metadata):
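The image branch and the removed video branch above share one idiom: stream the HTTP body straight to disk. Isolated for reference, with placeholder arguments:

```
import shutil
import requests

def stream_download(url, dest):
    """Stream a remote file to `dest` without loading it into memory."""
    response = requests.get(url, stream=True)  # stream=True defers the body
    if response.ok:
        with open(dest, "wb") as f:
            # response.raw is the undecoded byte stream
            shutil.copyfileobj(response.raw, f)
```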
@@ -93,6 +77,7 @@ def create_post(post_directory, post_metadata):
     post_metadata["account"]["display_name"] = name
     env.filters["localize_media_url"] = localize_media_url
     env.filters["filter_mastodon_urls"] = filter_mastodon_urls

     template = env.get_template("hashtag.md")

     with open(os.path.join(post_directory, "index.html"), "w") as f:
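The `env.filters[...]` lines above use jinja2's custom-filter hook: assign a function into the environment's filter table and templates can pipe values through it. A self-contained sketch (the filter body is illustrative, not Konfluks' real `localize_media_url`):

```
import jinja2

def localize_media_url(url):
    # illustrative stand-in: map a remote URL to its local basename
    return url.split("/")[-1]

env = jinja2.Environment(loader=jinja2.FileSystemLoader("templates"))
env.filters["localize_media_url"] = localize_media_url
# a template can then render: {{ item.url | localize_media_url }}
```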
@@ -2,7 +2,7 @@
 title: "{{ event.name }}"
 date: "{{ event.begin }}" #2021-06-10T10:46:33+02:00
 draft: false
-source: "lumbung calendar"
+categories: "calendar"
 event_begin: "{{ event.begin }}"
 event_end: "{{ event.end }}"
 duration: "{{ event.duration }}"
@@ -3,11 +3,11 @@ title: "{{ frontmatter.title }}"
 date: "{{ frontmatter.date }}" #2021-06-10T10:46:33+02:00
 draft: false
 summary: "{{ frontmatter.summary }}"
-contributors: {% if frontmatter.author %} ["{{ frontmatter.author }}"] {% endif %}
+authors: {% if frontmatter.author %} ["{{ frontmatter.author }}"] {% endif %}
 original_link: "{{ frontmatter.original_link }}"
 feed_name: "{{ frontmatter.feed_name}}"
-card_type: "{{ frontmatter.card_type }}"
-sources: ["{{ frontmatter.feed_name}}"]
+categories: ["{{ frontmatter.card_type }}", "{{ frontmatter.feed_name}}"]
+contributors: ["{{ frontmatter.feed_name}}"]
 tags: {{ frontmatter.tags }}
 {% if frontmatter.featured_image %}featured_image: "{{frontmatter.featured_image}}"{% endif %}
 ---
@@ -1,27 +1,17 @@
 ---
 date: {{ post_metadata.created_at }} #2021-06-10T10:46:33+02:00
 draft: false
-contributors: ["{{ post_metadata.account.display_name }}"]
+authors: ["{{ post_metadata.account.display_name }}"]
+contributors: ["{{ post_metadata.account.acct}}"]
 avatar: {{ post_metadata.account.avatar }}
+categories: ["shouts"]
+images: [{% for i in post_metadata.media_attachments %} {{ i.url }}, {% endfor %}]
 title: {{ post_metadata.account.display_name }}
 tags: [{% for i in post_metadata.tags %} "{{ i.name }}", {% endfor %}]
-images: [{% for i in post_metadata.media_attachments %}{% if i.type == "image" %}"{{ i.url | localize_media_url }}", {%endif%}{% endfor %}]
-videos: [{% for i in post_metadata.media_attachments %}{% if i.type == "video" %}"{{ i.url | localize_media_url }}", {%endif%}{% endfor %}]
 ---

 {% for item in post_metadata.media_attachments %}
-{% if item.type == "image" %}
 <img src="{{item.url | localize_media_url }}" alt="{{item.description}}">
-{% endif %}
-{% endfor %}
-
-{% for item in post_metadata.media_attachments %}
-{% if item.type == "video" %}
-<video controls width="540px" preload="none" poster="thumbnail.png">
-<source src="{{item.url | localize_media_url }}" type="video/mp4">
-{% if item.description %}{{item.description}}{% endif %}
-</video>
-{% endif %}
 {% endfor %}

 {{ post_metadata.content | filter_mastodon_urls }}
@@ -1,14 +0,0 @@
----
-title: "{{ frontmatter.title }}"
-date: "{{ frontmatter.date }}" #2021-06-10T10:46:33+02:00
-draft: false
-summary: "{{ frontmatter.summary }}"
-contributors: {% if frontmatter.author %} ["{{ frontmatter.author }}"] {% endif %}
-original_link: "{{ frontmatter.original_link }}"
-feed_name: "{{ frontmatter.feed_name}}"
-sources: ["timeline", "{{ frontmatter.feed_name}}"]
-timelines: {{ frontmatter.timelines }}
-hidden: true
----
-
-{{ content }}
@@ -9,7 +9,7 @@ channel_url: "{{ v.channel.url }}"
 contributors: ["{{ v.account.display_name }}"]
 preview_image: "{{ preview_image }}"
 images: ["./{{ preview_image }}"]
-sources: ["{{ v.channel.display_name }}"]
+categories: ["tv","{{ v.channel.display_name }}"]
 is_live: {{ v.is_live }}
 ---
@@ -1,381 +0,0 @@
-import os
-import shutil
-import time
-from hashlib import md5
-from ast import literal_eval as make_tuple
-from pathlib import Path
-from urllib.parse import urlparse
-from re import sub
-
-import arrow
-import feedparser
-import jinja2
-import requests
-from bs4 import BeautifulSoup
-from slugify import slugify
-from re import compile as re_compile
-
-yamlre = re_compile('"')
-
-
-def write_etag(feed_name, feed_data):
-    """
-    save timestamp of when feed was last modified
-    """
-    etag = ""
-    modified = ""
-
-    if "etag" in feed_data:
-        etag = feed_data.etag
-    if "modified" in feed_data:
-        modified = feed_data.modified
-
-    if etag or modified:
-        with open(os.path.join("etags", feed_name + ".txt"), "w") as f:
-            f.write(str((etag, modified)))
-
-
-def get_etag(feed_name):
-    """
-    return timestamp of when feed was last modified
-    """
-    fn = os.path.join("etags", feed_name + ".txt")
-    etag = ""
-    modified = ""
-
-    if os.path.exists(fn):
-        etag, modified = make_tuple(open(fn, "r").read())
-
-    return etag, modified
-
-
-def create_frontmatter(entry):
-    """
-    parse RSS metadata and return as frontmatter
-    """
-    if 'published' in entry:
-        published = entry.published_parsed
-    if 'updated' in entry:
-        published = entry.updated_parsed
-
-    published = arrow.get(published)
-
-    if 'author' in entry:
-        author = entry.author
-    else:
-        author = ''
-
-    if 'authors' in entry:
-        authors = []
-        for a in entry.authors:
-            authors.append(a['name'])
-
-    if 'summary' in entry:
-        summary = entry.summary
-    else:
-        summary = ''
-
-    if 'publisher' in entry:
-        publisher = entry.publisher
-    else:
-        publisher = ''
-
-    tags = []
-    if 'tags' in entry:
-        #TODO finish categories
-        for t in entry.tags:
-            tags.append(t['term'])
-
-    frontmatter = {
-        'title':entry.title,
-        'date': published.format(),
-        'summary': '',
-        'author': author,
-        'original_link': entry.link,
-        'feed_name': entry['feed_name'],
-        'timelines': str(tags),
-    }
-
-    return frontmatter
-
-
-def sanitize_yaml (frontmatter):
-    """
-    Escapes any occurences of double quotes
-    in any of the frontmatter fields
-    See: https://docs.octoprint.org/en/master/configuration/yaml.html#interesting-data-types
-    """
-    for k, v in frontmatter.items():
-        if type(v) == type([]):
-            #some fields are lists
-            l = []
-            for i in v:
-                i = yamlre.sub('\\"', i)
-                l.append(i)
-            frontmatter[k] = l
-
-        else:
-            v = yamlre.sub('\\"', v)
-            frontmatter[k] = v
-
-    return frontmatter
-
-
-def create_post(post_dir, entry):
-    """
-    write hugo post based on RSS entry
-    """
-    frontmatter = create_frontmatter(entry)
-
-    if not os.path.exists(post_dir):
-        os.makedirs(post_dir)
-
-    if "content" in entry:
-        post_content = entry.content[0].value
-    else:
-        post_content = entry.summary
-
-    parsed_content = parse_posts(post_dir, post_content)
-
-    template_dir = os.path.join(Path(__file__).parent.resolve(), "templates")
-    env = jinja2.Environment(loader=jinja2.FileSystemLoader(template_dir))
-    template = env.get_template("timeline.md")
-    with open(os.path.join(post_dir, "index.html"), "w") as f:  # n.b. .html
-        post = template.render(frontmatter=sanitize_yaml(frontmatter), content=parsed_content)
-        f.write(post)
-        print("created post for", entry.title, "({})".format(entry.link))
-
-
-def grab_media(post_directory, url, prefered_name=None):
-    """
-    download media linked in post to have local copy
-    if download succeeds return new local path otherwise return url
-    """
-    media_item = urlparse(url).path.split('/')[-1]
-
-    if prefered_name:
-        media_item = prefered_name
-
-    try:
-        if not os.path.exists(os.path.join(post_directory, media_item)):
-            #TODO: stream is true is a conditional so we could check the headers for things, mimetype etc
-            response = requests.get(url, stream=True)
-            if response.ok:
-                with open(os.path.join(post_directory, media_item), 'wb') as media_file:
-                    shutil.copyfileobj(response.raw, media_file)
-                print('Downloaded media item', media_item)
-                return media_item
-            return media_item
-        elif os.path.exists(os.path.join(post_directory, media_item)):
-            return media_item
-
-    except Exception as e:
-        print('Failed to download image', url)
-        print(e)
-        return url
-
-
-def parse_posts(post_dir, post_content):
-    """
-    parse the post content to for media items
-    replace foreign image with local copy
-    filter out iframe sources not in allowlist
-    """
-    soup = BeautifulSoup(post_content, "html.parser")
-    allowed_iframe_sources = ["youtube.com", "vimeo.com", "tv.lumbung.space"]
-
-    for img in soup(["img", "object"]):
-        if img.get("src") != None:
-            local_image = grab_media(post_dir, img["src"])
-            if img["src"] != local_image:
-                img["src"] = local_image
-
-    for iframe in soup(["iframe"]):
-        if not any(source in iframe["src"] for source in allowed_iframe_sources):
-            print("filtered iframe: {}...".format(iframe["src"][:25]))
-            iframe.decompose()
-    return soup.decode()
-
-
-def grab_feed(feed_url):
-    """
-    check whether feed has been updated
-    download & return it if it has
-    """
-    feed_name = urlparse(feed_url).netloc
-
-    etag, modified = get_etag(feed_name)
-
-    try:
-        if modified:
-            data = feedparser.parse(feed_url, modified=modified)
-        elif etag:
-            data = feedparser.parse(feed_url, etag=etag)
-        else:
-            data = feedparser.parse(feed_url)
-    except Exception as e:
-        print("Error grabbing feed")
-        print(feed_name)
-        print(e)
-        return False
-
-    print(data.status, feed_url)
-    if data.status == 200:
-        # 304 means the feed has not been modified since we last checked
-        write_etag(feed_name, data)
-        return data
-    return False
-
-
-def create_opds_post(post_dir, entry):
-    """
-    create a HUGO post based on OPDS entry
-    or update it if the timestamp is newer
-    Downloads the cover & file
-    """
-    frontmatter = create_frontmatter(entry)
-
-    template_dir = os.path.join(Path(__file__).parent.resolve(), "templates")
-    env = jinja2.Environment(loader=jinja2.FileSystemLoader(template_dir))
-    template = env.get_template("feed.md")
-
-    if not os.path.exists(post_dir):
-        os.makedirs(post_dir)
-
-    if os.path.exists(os.path.join(post_dir, '.timestamp')):
-        old_timestamp = open(os.path.join(post_dir, '.timestamp')).read()
-        old_timestamp = arrow.get(float(old_timestamp))
-        current_timestamp = arrow.get(entry['updated_parsed'])
-
-        if current_timestamp > old_timestamp:
-            pass
-        else:
-            print('Book "{}..." already up to date'.format(entry['title'][:32]))
-            return
-
-    for item in entry.links:
-        ft = item['type'].split('/')[-1]
-        fn = item['rel'].split('/')[-1]
-
-        if fn == "acquisition":
-            fn = "publication"  #calling the publications acquisition is weird
-
-        prefered_name = "{}-{}.{}".format(fn, slugify(entry['title']), ft)
-
-        grab_media(post_dir, item['href'], prefered_name)
-
-    if "summary" in entry:
-        summary = entry.summary
-    else:
-        summary = ""
-
-    with open(os.path.join(post_dir,'index.md'),'w') as f:
-        post = template.render(frontmatter=sanitize_yaml(frontmatter), content=summary)
-        f.write(post)
-        print('created post for Book', entry.title)
-
-    with open(os.path.join(post_dir, '.timestamp'), 'w') as f:
-        timestamp = arrow.get(entry['updated_parsed'])
-        f.write(timestamp.format('X'))
-
-
-def main():
-    feed_urls = open("feeds_list_timeline.txt", "r").read().splitlines()
-
-    start = time.time()
-
-    if not os.path.exists("etags"):
-        os.mkdir("etags")
-
-    output_dir = os.environ.get("OUTPUT_DIR")
-
-    if not os.path.exists(output_dir):
-        os.makedirs(output_dir)
-
-    feed_dict = dict()
-    for url in feed_urls:
-        feed_name = urlparse(url).netloc
-        feed_dict[url] = feed_name
-
-    feed_names = feed_dict.values()
-    content_dirs = os.listdir(output_dir)
-    for i in content_dirs:
-        if i not in feed_names:
-            shutil.rmtree(os.path.join(output_dir, i))
-            print("%s not in feeds_list.txt, removing local data" %(i))
-
-    # add iframe to the allowlist of feedparser's sanitizer,
-    # this is now handled in parse_post()
-    feedparser.sanitizer._HTMLSanitizer.acceptable_elements |= {"iframe"}
-
-    for feed_url in feed_urls:
-        feed_name = feed_dict[feed_url]
-
-        feed_dir = os.path.join(output_dir, feed_name)
-
-        if not os.path.exists(feed_dir):
-            os.makedirs(feed_dir)
-
-        existing_posts = os.listdir(feed_dir)
-
-        data = grab_feed(feed_url)
-
-        if data:
-            opds_feed = False
-            for i in data.feed['links']:
-                if i['rel'] == 'self':
-                    if 'opds' in i['type']:
-                        opds_feed = True
-                        print("OPDS type feed!")
-
-            for entry in data.entries:
-                # if 'tags' in entry:
-                #     for tag in entry.tags:
-                #         for x in ['lumbung.space', 'D15', 'lumbung']:
-                #             if x in tag['term']:
-                #                 print(entry.title)
-                entry["feed_name"] = feed_name
-
-                post_name = slugify(entry.title)
-
-                # pixelfed returns the whole post text as the post name. max
-                # filename length is 255 on many systems. here we're shortening
-                # the name and adding a hash to it to avoid a conflict in a
-                # situation where 2 posts start with exactly the same text.
-                if len(post_name) > 150:
-                    post_hash = md5(bytes(post_name, "utf-8"))
-                    post_name = post_name[:150] + "-" + post_hash.hexdigest()
-
-                if opds_feed:
-                    entry['opds'] = True
-                    #format: Beyond-Debiasing-Report_Online-75535a4886e3
-                    post_name = slugify(entry['title'])+'-'+entry['id'].split('-')[-1]
-
-                post_dir = os.path.join(output_dir, feed_name, post_name)
-
-                if post_name not in existing_posts:
-                    # if there is a blog entry we dont already have, make it
-                    if opds_feed:
-                        create_opds_post(post_dir, entry)
-                    else:
-                        create_post(post_dir, entry)
-
-                elif post_name in existing_posts:
-                    # if we already have it, update it
-                    if opds_feed:
-                        create_opds_post(post_dir, entry)
-                    else:
-                        create_post(post_dir, entry)
-
-                    existing_posts.remove(
-                        post_name
-                    )  # create list of posts which have not been returned by the feed
-
-        for post in existing_posts:
-            # remove blog posts no longer returned by the RSS feed
-            print("deleted", post)
-            shutil.rmtree(os.path.join(feed_dir, slugify(post)))

-    end = time.time()
-
-    print(end - start)
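The deleted `timeline.py` above persists etag/modified validators between runs so that `grab_feed` can make conditional requests. The underlying feedparser mechanism, as a standalone sketch (the URL is a placeholder):

```
import feedparser

url = "https://example.org/feed.xml"  # placeholder
first = feedparser.parse(url)

# feedparser exposes the validators when the server sends them
etag = getattr(first, "etag", None)
modified = getattr(first, "modified", None)

# pass them back; an unchanged feed answers 304 with no entries
second = feedparser.parse(url, etag=etag, modified=modified)
if getattr(second, "status", None) == 304:
    print("feed unchanged since last fetch")
```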
@@ -1,8 +1,8 @@
 [tool.poetry]
 name = "konfluks"
 version = "0.1.0"
-description = "Brings together small and dispersed streams of web content from different applications and websites together in a single large stream."
-authors = ["rra", "decentral1se", "knoflook"]
+description = "Python lib which powers lumbung[dot]space automation"
+authors = ["rra", "decentral1se"]
 license = "AGPLv3+"

 [tool.poetry.dependencies]
@@ -25,8 +25,7 @@ requires = ["poetry-core>=1.0.0"]
 build-backend = "poetry.core.masonry.api"

 [tool.poetry.scripts]
-konfluks-cal = "konfluks.calendars:main"
+konfluks-cal = "konfluks.cloudcal:main"
 konfluks-vid = "konfluks.video:main"
 konfluks-feed = "konfluks.feed:main"
-konfluks-timeline = "konfluks.timeline:main"
 konfluks-hash = "konfluks.hashtag:main"
setup.py (4 lines changed)
@@ -20,9 +20,8 @@ install_requires = \
 'requests>=2.26.0,<3.0.0']

 entry_points = \
-{'console_scripts': ['konfluks-cal = konfluks.calendars:main',
+{'console_scripts': ['konfluks-cal = konfluks.cloudcal:main',
                      'konfluks-feed = konfluks.feed:main',
-                     'konfluks-timeline = lumbunglib.timeline:main',
                      'konfluks-hash = konfluks.hashtag:main',
                      'konfluks-vid = konfluks.video:main']}
@@ -45,3 +44,4 @@ setup_kwargs = {

 setup(**setup_kwargs)