Merge branch 'main' of gitlab.com:bashrc2/epicyon

@@ -0,0 +1,8 @@
+pipeline:
+  test:
+    image: debian:testing
+    commands:
+      - apt-get update
+      - apt-get install -y python3-socks imagemagick python3-setuptools python3-cryptography python3-dateutil python3-idna python3-requests python3-django-timezone-field libimage-exiftool-perl python3-flake8 python3-pyqrcode python3-png python3-bandit gnupg
+      - python3 epicyon.py --tests
+      - python3 epicyon.py --testsnetwork

Makefile

@@ -3,6 +3,8 @@ VERSION=1.1.0
all:
debug:
+sbom:
+	scanoss-py scan . > sbom.json
source:
	rm -f *.*~ *~
	rm -f ontology/*~

@@ -17,6 +19,7 @@ source:
	rm -f ../${APP}*.deb ../${APP}*.changes ../${APP}*.asc ../${APP}*.dsc
	cd .. && mv ${APP} ${APP}-${VERSION} && tar -zcvf ${APP}_${VERSION}.orig.tar.gz ${APP}-${VERSION}/ && mv ${APP}-${VERSION} ${APP}
clean:
	rm -f \#*
	rm -f *.*~ *~ *.dot
	rm -f orgs/*~
	rm -f ontology/*~

@@ -25,9 +28,11 @@ clean:
	rm -f theme/indymediaclassic/welcome/*~
	rm -f theme/indymediamodern/welcome/*~
	rm -f website/EN/*~
	rm -f cwlists/*~
	rm -f gemini/EN/*~
	rm -f scripts/*~
	rm -f deploy/*~
	rm -f translations/*~
	rm -f flycheck_*
	rm -rf __pycache__
	rm -f calendar.css blog.css epicyon.css follow.css login.css options.css search.css suspended.css

README.md

@@ -4,13 +4,33 @@ Add issues on https://gitlab.com/bashrc2/epicyon/-/issues

<blockquote><b>Epicyon</b>, meaning <i>"more than a dog"</i>. Largest of the <i>Borophaginae</i>, which lived in North America 20-5 million years ago.</blockquote>

<img src="https://libreserver.org/epicyon/img/screenshot_starlight.jpg" width="80%"/>
<img src="https://libreserver.org/epicyon/img/screenshot_rc3.jpg" width="80%"/>

<img src="https://libreserver.org/epicyon/img/mobile.jpg" width="30%"/>

-Epicyon is a modern [ActivityPub](https://www.w3.org/TR/activitypub) compliant server implementing both S2S and C2S protocols and suitable for installation on single board computers. It includes features such as moderation tools, post expiry, content warnings, image descriptions, news feed and perimeter defense against adversaries. It contains *no JavaScript* and uses HTML+CSS with a Python backend.
+Epicyon is a [fediverse](https://en.wikipedia.org/wiki/Fediverse) server suitable for self-hosting a small number of accounts on low power systems.

-[Project Goals](README_goals.md) - [Commandline interface](README_commandline.md) - [Customizations](README_customizations.md) - [Software Architecture](README_architecture.md) - [Code of Conduct](code-of-conduct.md)
+Key features:
+
+* Open standards: HTML, CSS, ActivityPub, RSS, CalDAV.
+* Supports common web browsers and [shell browsers](https://lynx.invisible-island.net).
+* Will not drain your mobile or laptop battery.
+* Customisable themes. It doesn't have to look bland.
+* Emoji reactions.
+* Geospatial hashtags.
+* Does not require much RAM, either on server or client.
+* Suitable for installation on single board computers.
+* No timeline algorithms.
+* No javascript.
+* No database. Data stored as ordinary files.
+* No fashionable web frameworks. *"Boring by design"*.
+* No blockchain garbage.
+* Written in Python, with few dependencies.
+* AGPL license, which big tech hates.
+
+Epicyon is for people who are tired of *big anything* and just want to DIY their online social experience without much fuss or expense. Think *water cooler discussions* rather than *shouting into the void*, in which you're mainly reading and responding to the posts of people that you follow.
+
+[Project Goals](README_goals.md) - [Commandline interface](README_commandline.md) - [Customizations](README_customizations.md) - [Software Architecture](README_architecture.md) - [Code of Conduct](code-of-conduct.md) - [Principles of Unity](principlesofunity.md) - [C2S Desktop Client](README_desktop_client.md) - [Coding Style](README_coding_style.md)
+
+Matrix room: **#epicyon:matrix.libreserver.org**

@@ -29,8 +49,8 @@ On Arch/Parabola:
``` bash
sudo pacman -S tor python-pip python-pysocks python-cryptography \
               imagemagick python-requests \
               perl-image-exiftool python-dateutil \
               certbot flake8 bandit
sudo pip3 install pyqrcode pypng
```

@@ -55,6 +75,13 @@ In the most common case you'll be using systemd to set up a daemon to run the server.

The following instructions install Epicyon to the **/opt** directory. It's not essential that it be installed there, and it could be in any other preferred directory.

Clone the repo, or if you downloaded the tarball then extract it into the **/opt** directory.

``` bash
cd /opt
git clone https://gitlab.com/bashrc2/epicyon
```

Add a dedicated user so that we don't have to run as root.

``` bash

@@ -82,11 +109,32 @@ Type=simple
User=epicyon
Group=epicyon
WorkingDirectory=/opt/epicyon
-ExecStart=/usr/bin/python3 /opt/epicyon/epicyon.py --port 443 --proxy 7156 --domain YOUR_DOMAIN --registration open --logLoginFailures
+ExecStart=/usr/bin/python3 /opt/epicyon/epicyon.py --port 443 --proxy 7156 --domain YOUR_DOMAIN --registration open --log_login_failures
Environment=USER=epicyon
Environment=PYTHONUNBUFFERED=true
Restart=always
StandardError=syslog
CPUQuota=80%
+ProtectHome=true
+ProtectKernelTunables=true
+ProtectKernelModules=true
+ProtectControlGroups=true
+ProtectKernelLogs=true
+ProtectHostname=true
+ProtectClock=true
+ProtectProc=invisible
+ProcSubset=pid
+PrivateTmp=true
+PrivateUsers=true
+PrivateDevices=true
+PrivateIPC=true
+MemoryDenyWriteExecute=true
+NoNewPrivileges=true
+LockPersonality=true
+RestrictRealtime=true
+RestrictSUIDSGID=true
+RestrictNamespaces=true
+SystemCallArchitectures=native

[Install]
WantedBy=multi-user.target

@@ -134,6 +182,16 @@ server {
    listen 443 ssl;
    server_name YOUR_DOMAIN;

+    gzip on;
+    gzip_disable "msie6";
+    gzip_vary on;
+    gzip_proxied any;
+    gzip_min_length 1024;
+    gzip_comp_level 6;
+    gzip_buffers 16 8k;
+    gzip_http_version 1.1;
+    gzip_types text/plain text/css application/json application/ld+json application/javascript text/xml application/xml application/rdf+xml application/xml+rss text/javascript;

    ssl_stapling off;
    ssl_stapling_verify off;
    ssl on;

@@ -141,19 +199,19 @@ server {
    ssl_certificate_key /etc/letsencrypt/live/YOUR_DOMAIN/privkey.pem;
    #ssl_dhparam /etc/ssl/certs/YOUR_DOMAIN.dhparam;

-    ssl_session_cache builtin:1000 shared:SSL:10m;
-    ssl_session_timeout 60m;
-    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1.2 TLSv1.3;
-    ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
+    ssl_ciphers HIGH:!MEDIUM:!LOW:!aNULL:!NULL:!SHA;
+    ssl_prefer_server_ciphers on;
+    ssl_session_cache shared:SSL:10m;
+    ssl_session_tickets off;

    add_header Content-Security-Policy "default-src https:; script-src https: 'unsafe-inline'; style-src https: 'unsafe-inline'";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Download-Options noopen;
    add_header X-Permitted-Cross-Domain-Policies none;

    add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive";
-    add_header Strict-Transport-Security max-age=15768000;
+    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload" always;

    access_log /dev/null;
    error_log /dev/null;

@@ -165,6 +223,9 @@ server {
      try_files $uri =404;
    }

+    keepalive_timeout 70;
+    sendfile on;
+
    location / {
        proxy_http_version 1.1;
        client_max_body_size 31M;

@@ -184,6 +245,7 @@ server {
        proxy_request_buffering off;
        proxy_buffering off;
        proxy_pass http://localhost:7156;
+        tcp_nodelay on;
    }
}
```

@@ -197,7 +259,9 @@ ln -s /etc/nginx/sites-available/YOUR_DOMAIN /etc/nginx/sites-enabled/

Generate a LetsEncrypt certificate.

``` bash
+systemctl stop nginx
certbot certonly -n --server https://acme-v02.api.letsencrypt.org/directory --standalone -d YOUR_DOMAIN --renew-by-default --agree-tos --email YOUR_EMAIL
+systemctl start nginx
```

And restart the web server:

@@ -278,3 +342,13 @@ To run the network tests. These simulate instances exchanging messages.
``` bash
python3 epicyon.py --testsnetwork
```

+## Software Bill of Materials
+
+To update the software bill of materials:
+
+``` bash
+sudo pip3 install scanoss
+make clean
+make sbom
+```

@@ -16,6 +16,10 @@ Although it can be single user, this is not strictly a single user system.

The design of this system is opinionated, and to a large extent informed by years of past experience in the fediverse. There is no claim to neutrality of any sort. Automatic removal of hellthreads and other common griefing tactics is an example of this.

+### Privacy Sensitive Defaults
+
+Follow approval should be required by default. This gives the user a chance to see who wants to follow them and make a decision. Also by default, direct messages should not be permitted except with accounts that you are following. This helps to reduce spam and harassment from random accounts in the wider fediverse. The aim is for the user to have a good experience by default, even if they have not yet built up any sort of block list.
+
### Resisting Centralization

Centralization is characterized by the typical fixation upon "scale" within the software industry. Systems which scale, in the way which is commonly understood, mean that a few individuals can control the social lives of many, and extract value from them in often cynical and manipulative ways.

@@ -24,7 +28,7 @@ In general, methods have been preferred which do not vertically scale. This incl

Being hostile towards the common notion of scaling means that this system will be of no interest to "big tech" and can't easily be used within extractive economic models without a substantial rewrite. This avoids the typical cooption strategies in which large companies eventually take over what was originally software developed by grassroots activists to address real community needs.

-This system should however be able to scale rhizomatically with the deployment of many small instances federated together. Instead of scaling up, scale out. In a network of many small instances nobody has overall control and corporate capture is much more unlikely. Small instances also minimize the bureaucratic requirements for governance processes, which at medium to large scale eventually becomes tyrannical.
+This system should however be able to scale rhizomatically with the deployment of many small instances federated together. Instead of scaling up, scale out. In a network of many small instances nobody has overall control and corporate capture is far less feasible. Small instances also minimize the bureaucratic requirements for governance processes, which at medium to large scale eventually become tyrannical.

### Roles

@@ -32,11 +36,11 @@ The roles within an instance are comparable to the crew roles onboard a ship, wi

### No Javascript

-This is so that the system can be accessed and used normally with javascript in the web browser turned off. If you want to have good security then this is useful, since lack of javascript greatly reduces the attack surface and constrains adversaries to a limited number of vectors.
+This is so that the system can be accessed and used normally with javascript in the web browser turned off. If you want to have good security then this is useful, since lack of javascript greatly reduces the attack surface and constrains adversaries to a limited number of vectors. Not using javascript also makes this system usable in shell based browsers such as Lynx, or other less common browsers, which helps to avoid being locked in to a browser duopoly.

### Block Crawlers

-Ordinarily web crawlers would not be a problem, but in the context of a social network even having crawlers index public posts can create ethical dilemmas in some circumstances. News instances may allow crawlers, but other types of instances should block them.
+Ordinarily web crawlers would not be a problem, but in the context of a social network even having crawlers index public posts can create ethical dilemmas in some circumstances. News and blogging instances may allow crawlers, but other types of instances should block them.

### No Local or Federated Timelines

@@ -60,6 +64,9 @@ It is usually safe to assume that the federated network beyond your instance is

Where JSON linked data signatures are supported there should not be arbitrary schema lookups via the web. Instead, recognized contexts should be added to *context.py*. This is in order to follow the principle of *no processing without full recognition*, in which the recognition step is not endlessly extendable by untrusted parties.
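
A minimal sketch of this principle, with a hypothetical allowlist and function name (the actual recognized contexts live in *context.py*): unknown contexts are rejected outright, never fetched over the web.

```python
# Hypothetical sketch of "no processing without full recognition":
# only contexts on a fixed allowlist are accepted
RECOGNIZED_CONTEXTS = {
    "https://www.w3.org/ns/activitystreams",
    "https://w3id.org/security/v1"
}


def context_recognized(context) -> bool:
    """Returns True only if every string entry is on the allowlist"""
    entries = context if isinstance(context, list) else [context]
    for entry in entries:
        if isinstance(entry, str) and entry not in RECOGNIZED_CONTEXTS:
            return False
    return True
```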

+### Avoid Web Frameworks
+
+In general avoid using web frameworks and instead use local modules which are prefixed with *webapp_*. Web frameworks are built for conventional software engineering by large companies who are designing for scale. They typically have database dependencies and contain a lot of hardcoded Google stuff or other things which will leak metadata or be incompatible with onion routing. Keeping up with web frameworks is a constant firefight. They also create a massive attack surface requiring constant vigilance.
+
## High Level Architecture

@@ -0,0 +1,21 @@
+# Epicyon Coding Style
+
+Try to keep to the typical PEP8 coding style supported by Python static analysis systems.
+
+Variables are all lower case, using underscores to separate words (snake case).
+
+Variables sent via webforms (with name="someVariableName") or within config.json are usually CamelCase, in order to clearly distinguish them from ordinary program variables.
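
For instance (hypothetical names, purely illustrative of the convention):

```python
# The CamelCase name arrives from a webform field; everything that
# lives only inside the program is snake_case
fields = {'displayNickname': 'Nick Name'}   # hypothetical form data
display_nickname = fields.get('displayNickname')
max_posts_in_box = 32000                    # ordinary program variable
```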
+
+Procedural style. Think "C style in Python". Avoid classes and objects as far as possible. This avoids *obfuscation via abstractions*. With procedural style everything is maximally obvious/concrete and can be followed through step by step without needing a lot of implicit background knowledge. Procedural style also makes more rigorous static analysis possible, to catch bugs before they happen at runtime. Mantra: "In the long run, obviousness beats clever abstractions".
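
A small sketch of what this looks like in practice, using hypothetical names: plain data structures, plain functions, and state passed explicitly as arguments rather than hidden inside objects.

```python
# Hypothetical illustration of the procedural preference: the data is
# a plain list and each step is explicit, so no class is needed
def add_follower(followers: [], nickname: str, domain: str) -> None:
    """Appends a follower handle to a plain list"""
    handle = nickname + '@' + domain
    if handle not in followers:
        followers.append(handle)
```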
+
+Declare all called functions individually at the top of each module. This avoids any possible mistakes with colliding function names, and allows static analysis to explicitly check all dependencies.
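
This style is visible throughout the modules changed below; for example, acceptreject.py in this very commit imports each utility function on its own line:

```python
# Taken from acceptreject.py as changed in this commit: one explicit
# import per called function
from utils import get_full_domain
from utils import get_domain_from_actor
from utils import get_nickname_from_actor
```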
+
+Don't use any features of Python which are not supported by the version of Python within the current Debian stable release. Don't assume that all users are running the latest cutting-edge Python release.
+
+Before doing a commit, run all the unit tests. There are three layers of testing. The first just checks PEP8 compliance. The second runs a more thorough static analysis and unit tests. The third simulates instances communicating with each other.
+
+```bash
+./static_analysis
+python3 epicyon.py --tests
+python3 epicyon.py --testsnetwork
+```
@@ -222,24 +222,32 @@ python3 epicyon.py --nickname [yournick] --domain [name] \
  --undolike [url] --password [c2s password]
```

-## Archiving posts
+## Archiving and Expiring posts

-You can archive old posts with:
+As a general rule, all posts will be retained unless otherwise specified. However, on systems with finite and small disk storage, running out of space is a show-stopping catastrophe, and so clearing down old posts is highly advisable. You can achieve this using the archive commandline option, and optionally also with a cron job.
+
+You can archive old posts, and expire posts as specified within account profile settings, with:

``` bash
python3 epicyon.py --archive [directory]
```

-Which will move old posts to the given directory. You can also specify the number of weeks after which images will be archived, and the maximum number of posts within in/outboxes.
+Which will move old posts to the given directory and delete any expired posts. You can also specify the number of weeks after which images will be archived, and the maximum number of posts within in/outboxes.

``` bash
-python3 epicyon.py --archive [directory] --archiveweeks 4 --maxposts 256
+python3 epicyon.py --archive [directory] --archiveweeks 4 --maxposts 32000
```

If you want old posts to be deleted for data minimization purposes then the archive location can be set to */dev/null*.

``` bash
-python3 epicyon.py --archive /dev/null --archiveweeks 4 --maxposts 256
+python3 epicyon.py --archive /dev/null --archiveweeks 4 --maxposts 32000
```

+You can put this command into a cron job to ensure that old posts are cleared down regularly. In */etc/crontab* add an entry such as:
+
+``` bash
+*/60 * * * * root cd /opt/epicyon && /usr/bin/python3 epicyon.py --archive /dev/null --archiveweeks 4 --maxposts 32000
+```

## Blocking and unblocking
@@ -372,3 +380,31 @@ To remove a shared item:

``` bash
python3 epicyon.py --undoItemName "spanner" --nickname [yournick] --domain [yourdomain] --password [c2s password]
```

+## Calendar
+
+The calendar for each account can be accessed via CalDav (RFC4791). This makes it easy to integrate the social calendar with other applications. For example, to obtain events for a month:
+
+```bash
+python3 epicyon.py --dav --nickname [yournick] --domain [yourdomain] --year [year] --month [month number]
+```
+
+You will be prompted for your login password, or you can use the **--password** option. You can also use the **--day** option to obtain events for a particular day.
+
+The CalDav endpoint for an account is:
+
+```bash
+yourdomain/calendars/yournick
+```
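
As an unofficial sketch, the endpoint can also be queried directly with any HTTP client that speaks the CalDav methods. Here is one using the Python requests library; the account name, domain and password are placeholders, and server support for individual CalDav methods may vary.

```python
# Unofficial sketch: probe the CalDav endpoint shown above directly,
# using HTTP basic auth with placeholder credentials
import requests

response = requests.request(
    'PROPFIND',
    'https://yourdomain/calendars/yournick',
    auth=('yournick', 'your-c2s-password'),
    headers={'Depth': '0'},
    timeout=30)
print(response.status_code)
```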

+## Web Crawlers
+
+Having search engines index social media posts is not usually considered appropriate, since even if "public" they may contain personally identifiable information. If you are running a news instance then web crawlers will be permitted by the system, but otherwise they will be blocked by default.
+
+If you want to allow specific web crawlers then, when running the daemon (typically with systemd), you can use the **crawlersAllowed** option. It can take a list of bot names, separated by commas. For example:
+
+```bash
+--crawlersAllowed "googlebot, apple"
+```
+
+Typically web crawlers have names ending in "bot", but partial names can also be used.
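
A sketch of what partial matching implies (illustrative only, not the server's actual code): the allowed name only needs to appear somewhere within the crawler's User-Agent string.

```python
# Illustrative only: a partial bot name matches anywhere within the
# User-Agent header, case-insensitively
crawlers_allowed = ['googlebot', 'apple']
user_agent = 'Mozilla/5.0 (compatible; Googlebot/2.1)'
allowed = any(name.lower() in user_agent.lower()
              for name in crawlers_allowed)
```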

@@ -26,6 +26,20 @@ When a moderator report is created the message at the top of the screen can be c

Extra emoji can be added to the *emoji* directory. You should then update the **emoji/emoji.json** file, which maps each name to its filename (without the .png extension).

+Another way to import emoji is to create a text file where each line is the url of the emoji png file and the emoji name, separated by a comma.
+
+```bash
+https://somesite/emoji1.png, :emojiname1:
+https://somesite/emoji2.png, :emojiname2:
+https://somesite/emoji3.png, :emojiname3:
+```
+
+Then this can be imported with:
+
+```bash
+python3 epicyon.py --import-emoji [textfile]
+```
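
A minimal sketch of how such an import file can be parsed (hypothetical helper, not the actual importer): each line is split on the first comma into a png url and an emoji name.

```python
# Hypothetical parser for the import file format shown above,
# returning a dict of emoji name -> png url
def parse_emoji_import(textfile: str) -> {}:
    """Reads lines of the form "png url, :emojiname:" """
    emoji = {}
    with open(textfile, 'r', encoding='utf-8') as fp_import:
        for line in fp_import:
            if ',' not in line:
                continue
            url, name = line.split(',', 1)
            emoji[name.strip().strip(':')] = url.strip()
    return emoji
```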

## Themes

-If you want to create a new theme then the functions for that are within *theme.py*. These functions take the CSS templates and modify them. You will need to edit *themesDropdown* within *webinterface.py* and add the appropriate translations for the theme name. Themes are selectable from the profile screen of the administrator.
+If you want to create a new theme then copy the *default* directory within the *theme* directory, rename it to your new theme name, and then edit the colors and fonts within *theme.json* and change the icons and banners. Themes are selectable from the graphic design section of the administrator's profile screen, or that of any account having the *artist* role.
@@ -1,4 +1,6 @@
-# Desktop client
+# C2S Desktop client

+<img src="https://libreserver.org/epicyon/img/desktop_client.jpg" width="80%"/>
+
## Installing and running

@@ -26,6 +28,12 @@ Or if you have picospeaker installed:
~/epicyon-client-pico
```

+Or if you have mimic3 installed:
+
+``` bash
+~/epicyon-client-mimic3
+```
+
## Commands

The desktop client has a few commands, which may be more convenient than the web interface for some purposes:

@@ -85,7 +93,13 @@ Or a quicker version, if you have installed the desktop client as described abov
Or if you have [picospeaker](https://gitlab.com/ky1e/picospeaker) installed:

``` bash
-python3 epicyon.py --notifyShowNewPosts --screenreader picospeaker --desktop yournickname@yourdomain
+~/epicyon-stream-pico
```

+Or if you have mimic3 installed:
+
+``` bash
+~/epicyon-stream-mimic3
+```
+
You can also use the **--password** option to provide the password. This will then stay running and incoming posts will be announced as they arrive.
@@ -45,6 +45,7 @@ The following are considered anti-features of other social network systems, sinc
* Algorithmic timelines (i.e. non-chronological)
* Direct payment mechanisms, although integration with other services may be possible
* Any variety of blockchain
* Non Fungible Token (NFT) features
* Anything based upon "proof of stake". The "people who have more, get more" principle should be rejected.
* Like counts above some small maximum number. The aim is to avoid people getting addicted to making numbers go up, and especially to avoid the dark market in fake likes.
+* Sponsored posts
@@ -7,7 +7,6 @@
## Groups

* Groups can be defined as having particular roles/skills
* Parse posts from Lemmy groups
* Think of a way to display groups. Maybe assign a hashtag and display them like hashtag timelines

## Questions

@@ -20,7 +19,5 @@
## Code

* More unit test coverage
* Unit test for federated shared items
* Break up large functions into smaller ones
* Architecture diagrams
* Code documentation?

acceptreject.py

@@ -1,220 +1,231 @@
__filename__ = "acceptreject.py"
__author__ = "Bob Mottram"
__license__ = "AGPL3+"
-__version__ = "1.2.0"
+__version__ = "1.3.0"
__maintainer__ = "Bob Mottram"
__email__ = "bob@libreserver.org"
__status__ = "Production"
__module_group__ = "ActivityPub"

import os
-from utils import hasUsersPath
-from utils import getFullDomain
-from utils import urlPermitted
-from utils import getDomainFromActor
-from utils import getNicknameFromActor
-from utils import domainPermitted
-from utils import followPerson
-from utils import hasObjectDict
-from utils import acctDir
-from utils import hasGroupType
-from utils import localActorUrl
+from utils import text_in_file
+from utils import has_object_string_object
+from utils import has_users_path
+from utils import get_full_domain
+from utils import url_permitted
+from utils import get_domain_from_actor
+from utils import get_nickname_from_actor
+from utils import domain_permitted
+from utils import follow_person
+from utils import acct_dir
+from utils import has_group_type
+from utils import local_actor_url
+from utils import has_actor
+from utils import has_object_string_type


-def _createAcceptReject(baseDir: str, federationList: [],
-                        nickname: str, domain: str, port: int,
-                        toUrl: str, ccUrl: str, httpPrefix: str,
-                        objectJson: {}, acceptType: str) -> {}:
+def _create_accept_reject(base_dir: str, federation_list: [],
+                          nickname: str, domain: str, port: int,
+                          to_url: str, cc_url: str, http_prefix: str,
+                          object_json: {}, accept_type: str) -> {}:
    """Accepts or rejects something (eg. a follow request or offer)
-    Typically toUrl will be https://www.w3.org/ns/activitystreams#Public
-    and ccUrl might be a specific person favorited or repeated and
+    Typically to_url will be https://www.w3.org/ns/activitystreams#Public
+    and cc_url might be a specific person favorited or repeated and
    the followers url objectUrl is typically the url of the message,
    corresponding to url or atomUri in createPostBase
    """
-    if not objectJson.get('actor'):
+    if not object_json.get('actor'):
        return None

-    if not urlPermitted(objectJson['actor'], federationList):
+    if not url_permitted(object_json['actor'], federation_list):
        return None

-    domain = getFullDomain(domain, port)
+    domain = get_full_domain(domain, port)

-    newAccept = {
+    new_accept = {
        "@context": "https://www.w3.org/ns/activitystreams",
-        'type': acceptType,
-        'actor': localActorUrl(httpPrefix, nickname, domain),
-        'to': [toUrl],
+        'type': accept_type,
+        'actor': local_actor_url(http_prefix, nickname, domain),
+        'to': [to_url],
        'cc': [],
-        'object': objectJson
+        'object': object_json
    }
-    if ccUrl:
-        if len(ccUrl) > 0:
-            newAccept['cc'] = [ccUrl]
-    return newAccept
+    if cc_url:
+        if len(cc_url) > 0:
+            new_accept['cc'] = [cc_url]
+    return new_accept


-def createAccept(baseDir: str, federationList: [],
-                 nickname: str, domain: str, port: int,
-                 toUrl: str, ccUrl: str, httpPrefix: str,
-                 objectJson: {}) -> {}:
-    return _createAcceptReject(baseDir, federationList,
-                               nickname, domain, port,
-                               toUrl, ccUrl, httpPrefix,
-                               objectJson, 'Accept')
+def create_accept(base_dir: str, federation_list: [],
+                  nickname: str, domain: str, port: int,
+                  to_url: str, cc_url: str, http_prefix: str,
+                  object_json: {}) -> {}:
+    return _create_accept_reject(base_dir, federation_list,
+                                 nickname, domain, port,
+                                 to_url, cc_url, http_prefix,
+                                 object_json, 'Accept')


-def createReject(baseDir: str, federationList: [],
-                 nickname: str, domain: str, port: int,
-                 toUrl: str, ccUrl: str, httpPrefix: str,
-                 objectJson: {}) -> {}:
-    return _createAcceptReject(baseDir, federationList,
-                               nickname, domain, port,
-                               toUrl, ccUrl,
-                               httpPrefix, objectJson, 'Reject')
+def create_reject(base_dir: str, federation_list: [],
+                  nickname: str, domain: str, port: int,
+                  to_url: str, cc_url: str, http_prefix: str,
+                  object_json: {}) -> {}:
+    return _create_accept_reject(base_dir, federation_list,
+                                 nickname, domain, port,
+                                 to_url, cc_url,
+                                 http_prefix, object_json, 'Reject')


-def _acceptFollow(baseDir: str, domain: str, messageJson: {},
-                  federationList: [], debug: bool) -> None:
+def _accept_follow(base_dir: str, message_json: {},
+                   federation_list: [], debug: bool,
+                   curr_domain: str,
+                   onion_domain: str, i2p_domain: str) -> None:
    """Receiving a follow Accept activity
    """
-    if not hasObjectDict(messageJson):
+    if not has_object_string_type(message_json, debug):
        return
-    if not messageJson['object'].get('type'):
-        return
-    if not messageJson['object']['type'] == 'Follow':
-        if not messageJson['object']['type'] == 'Join':
+    if not message_json['object']['type'] == 'Follow':
+        if not message_json['object']['type'] == 'Join':
            return
    if debug:
        print('DEBUG: receiving Follow activity')
-    if not messageJson['object'].get('actor'):
+    if not message_json['object'].get('actor'):
        print('DEBUG: no actor in Follow activity')
        return
    # no, this isn't a mistake
-    if not messageJson['object'].get('object'):
-        print('DEBUG: no object within Follow activity')
+    if not has_object_string_object(message_json, debug):
        return
-    if not messageJson.get('to'):
+    if not message_json.get('to'):
        if debug:
            print('DEBUG: No "to" parameter in follow Accept')
        return
    if debug:
-        print('DEBUG: follow Accept received')
-    thisActor = messageJson['object']['actor']
-    nickname = getNicknameFromActor(thisActor)
+        print('DEBUG: follow Accept received ' + str(message_json))
+    this_actor = message_json['object']['actor']
+    nickname = get_nickname_from_actor(this_actor)
    if not nickname:
-        print('WARN: no nickname found in ' + thisActor)
+        print('WARN: no nickname found in ' + this_actor)
        return
-    acceptedDomain, acceptedPort = getDomainFromActor(thisActor)
-    if not acceptedDomain:
+    accepted_domain, accepted_port = get_domain_from_actor(this_actor)
+    if not accepted_domain:
        if debug:
-            print('DEBUG: domain not found in ' + thisActor)
+            print('DEBUG: domain not found in ' + this_actor)
        return
    if not nickname:
        if debug:
-            print('DEBUG: nickname not found in ' + thisActor)
+            print('DEBUG: nickname not found in ' + this_actor)
        return
-    if acceptedPort:
-        if '/' + acceptedDomain + ':' + str(acceptedPort) + \
-           '/users/' + nickname not in thisActor:
+    if accepted_port:
+        if '/' + accepted_domain + ':' + str(accepted_port) + \
+           '/users/' + nickname not in this_actor:
            if debug:
-                print('Port: ' + str(acceptedPort))
-                print('Expected: /' + acceptedDomain + ':' +
-                      str(acceptedPort) + '/users/' + nickname)
-                print('Actual: ' + thisActor)
-                print('DEBUG: unrecognized actor ' + thisActor)
+                print('Port: ' + str(accepted_port))
+                print('Expected: /' + accepted_domain + ':' +
+                      str(accepted_port) + '/users/' + nickname)
+                print('Actual: ' + this_actor)
+                print('DEBUG: unrecognized actor ' + this_actor)
            return
    else:
-        if not '/' + acceptedDomain + '/users/' + nickname in thisActor:
+        if not '/' + accepted_domain + '/users/' + nickname in this_actor:
            if debug:
-                print('Expected: /' + acceptedDomain + '/users/' + nickname)
-                print('Actual: ' + thisActor)
-                print('DEBUG: unrecognized actor ' + thisActor)
+                print('Expected: /' + accepted_domain + '/users/' + nickname)
+                print('Actual: ' + this_actor)
+                print('DEBUG: unrecognized actor ' + this_actor)
            return
-    followedActor = messageJson['object']['object']
-    followedDomain, port = getDomainFromActor(followedActor)
-    if not followedDomain:
+    followed_actor = message_json['object']['object']
+    followed_domain, port = get_domain_from_actor(followed_actor)
+    if not followed_domain:
        print('DEBUG: no domain found within Follow activity object ' +
-              followedActor)
+              followed_actor)
        return
-    followedDomainFull = followedDomain
+    followed_domain_full = followed_domain
    if port:
-        followedDomainFull = followedDomain + ':' + str(port)
-    followedNickname = getNicknameFromActor(followedActor)
-    if not followedNickname:
+        followed_domain_full = followed_domain + ':' + str(port)
+    followed_nickname = get_nickname_from_actor(followed_actor)
+    if not followed_nickname:
        print('DEBUG: no nickname found within Follow activity object ' +
-              followedActor)
+              followed_actor)
        return

-    acceptedDomainFull = acceptedDomain
-    if acceptedPort:
-        acceptedDomainFull = acceptedDomain + ':' + str(acceptedPort)
+    # convert from onion/i2p to clearnet accepted domain
+    if onion_domain:
+        if accepted_domain.endswith('.onion') and \
+           not curr_domain.endswith('.onion'):
+            accepted_domain = curr_domain
+    if i2p_domain:
+        if accepted_domain.endswith('.i2p') and \
+           not curr_domain.endswith('.i2p'):
+            accepted_domain = curr_domain
+
+    accepted_domain_full = accepted_domain
+    if accepted_port:
+        accepted_domain_full = accepted_domain + ':' + str(accepted_port)

    # has this person already been unfollowed?
-    unfollowedFilename = \
-        acctDir(baseDir, nickname, acceptedDomainFull) + '/unfollowed.txt'
-    if os.path.isfile(unfollowedFilename):
-        if followedNickname + '@' + followedDomainFull in \
-           open(unfollowedFilename).read():
+    unfollowed_filename = \
+        acct_dir(base_dir, nickname, accepted_domain_full) + '/unfollowed.txt'
+    if os.path.isfile(unfollowed_filename):
+        if text_in_file(followed_nickname + '@' + followed_domain_full,
+                        unfollowed_filename):
            if debug:
                print('DEBUG: follow accept arrived for ' +
-                      nickname + '@' + acceptedDomainFull +
-                      ' from ' + followedNickname + '@' + followedDomainFull +
+                      nickname + '@' + accepted_domain_full +
+                      ' from ' +
+                      followed_nickname + '@' + followed_domain_full +
                      ' but they have been unfollowed')
            return

    # does the url path indicate that this is a group actor
-    groupAccount = hasGroupType(baseDir, followedActor, None, debug)
+    group_account = has_group_type(base_dir, followed_actor, None, debug)
    if debug:
-        print('Accepted follow is a group: ' + str(groupAccount) +
-              ' ' + followedActor + ' ' + baseDir)
+        print('Accepted follow is a group: ' + str(group_account) +
+              ' ' + followed_actor + ' ' + base_dir)

-    if followPerson(baseDir,
-                    nickname, acceptedDomainFull,
-                    followedNickname, followedDomainFull,
-                    federationList, debug, groupAccount):
+    if follow_person(base_dir,
+                     nickname, accepted_domain_full,
+                     followed_nickname, followed_domain_full,
+                     federation_list, debug, group_account):
        if debug:
-            print('DEBUG: ' + nickname + '@' + acceptedDomainFull +
-                  ' followed ' + followedNickname + '@' + followedDomainFull)
+            print('DEBUG: ' + nickname + '@' + accepted_domain_full +
+                  ' followed ' +
+                  followed_nickname + '@' + followed_domain_full)
    else:
        if debug:
            print('DEBUG: Unable to create follow - ' +
-                  nickname + '@' + acceptedDomain + ' -> ' +
-                  followedNickname + '@' + followedDomain)
+                  nickname + '@' + accepted_domain + ' -> ' +
+                  followed_nickname + '@' + followed_domain)


-def receiveAcceptReject(session, baseDir: str,
-                        httpPrefix: str, domain: str, port: int,
-                        sendThreads: [], postLog: [], cachedWebfingers: {},
-                        personCache: {}, messageJson: {}, federationList: [],
-                        debug: bool) -> bool:
+def receive_accept_reject(base_dir: str, domain: str, message_json: {},
+                          federation_list: [], debug: bool, curr_domain: str,
+                          onion_domain: str, i2p_domain: str) -> bool:
    """Receives an Accept or Reject within the POST section of HTTPServer
    """
-    if messageJson['type'] != 'Accept' and messageJson['type'] != 'Reject':
+    if message_json['type'] != 'Accept' and message_json['type'] != 'Reject':
        return False
-    if not messageJson.get('actor'):
-        if debug:
-            print('DEBUG: ' + messageJson['type'] + ' has no actor')
+    if not has_actor(message_json, debug):
        return False
-    if not hasUsersPath(messageJson['actor']):
+    if not has_users_path(message_json['actor']):
        if debug:
            print('DEBUG: "users" or "profile" missing from actor in ' +
-                  messageJson['type'] + '. Assuming single user instance.')
-    domain, tempPort = getDomainFromActor(messageJson['actor'])
-    if not domainPermitted(domain, federationList):
+                  message_json['type'] + '. Assuming single user instance.')
+    domain, _ = get_domain_from_actor(message_json['actor'])
+    if not domain_permitted(domain, federation_list):
        if debug:
-            print('DEBUG: ' + messageJson['type'] +
+            print('DEBUG: ' + message_json['type'] +
                  ' from domain not permitted - ' + domain)
        return False
-    nickname = getNicknameFromActor(messageJson['actor'])
+    nickname = get_nickname_from_actor(message_json['actor'])
    if not nickname:
        # single user instance
        nickname = 'dev'
        if debug:
-            print('DEBUG: ' + messageJson['type'] +
+            print('DEBUG: ' + message_json['type'] +
                  ' does not contain a nickname. ' +
                  'Assuming single user instance.')
    # receive follow accept
-    _acceptFollow(baseDir, domain, messageJson, federationList, debug)
+    _accept_follow(base_dir, message_json, federation_list, debug,
+                   curr_domain, onion_domain, i2p_domain)
    if debug:
-        print('DEBUG: Uh, ' + messageJson['type'] + ', I guess')
+        print('DEBUG: Uh, ' + message_json['type'] + ', I guess')
    return True
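
For reference, a follow Accept built by `_create_accept_reject` above has the following shape; the urls are placeholders, and the `object` is the original Follow activity being accepted.

```python
# Illustrative only: the shape of an Accept produced by
# _create_accept_reject, with placeholder urls
accept_json = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Accept",
    "actor": "https://yourdomain/users/yournick",
    "to": ["https://otherdomain/users/othernick"],
    "cc": [],
    "object": {
        "type": "Follow",
        "actor": "https://otherdomain/users/othernick",
        "object": "https://yourdomain/users/yournick"
    }
}
```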

announce.py

@@ -1,433 +1,445 @@
__filename__ = "announce.py"
__author__ = "Bob Mottram"
__license__ = "AGPL3+"
-__version__ = "1.2.0"
+__version__ = "1.3.0"
__maintainer__ = "Bob Mottram"
__email__ = "bob@libreserver.org"
__status__ = "Production"
__module_group__ = "ActivityPub"

-from utils import hasGroupType
-from utils import removeDomainPort
-from utils import hasObjectDict
-from utils import removeIdEnding
-from utils import hasUsersPath
-from utils import getFullDomain
-from utils import getStatusNumber
-from utils import createOutboxDir
-from utils import urlPermitted
-from utils import getNicknameFromActor
-from utils import getDomainFromActor
-from utils import locatePost
-from utils import saveJson
-from utils import undoAnnounceCollectionEntry
-from utils import updateAnnounceCollection
-from utils import localActorUrl
-from utils import replaceUsersWithAt
-from posts import sendSignedJson
-from posts import getPersonBox
-from session import postJson
-from webfinger import webfingerHandle
-from auth import createBasicAuthHeader
+from utils import has_object_string_object
+from utils import has_group_type
+from utils import has_object_dict
+from utils import remove_domain_port
+from utils import remove_id_ending
+from utils import has_users_path
+from utils import get_full_domain
+from utils import get_status_number
+from utils import create_outbox_dir
+from utils import url_permitted
+from utils import get_nickname_from_actor
+from utils import get_domain_from_actor
+from utils import locate_post
+from utils import save_json
+from utils import undo_announce_collection_entry
+from utils import update_announce_collection
+from utils import local_actor_url
+from utils import replace_users_with_at
+from utils import has_actor
+from utils import has_object_string_type
+from posts import send_signed_json
+from posts import get_person_box
+from session import post_json
+from webfinger import webfinger_handle
+from auth import create_basic_auth_header


-def isSelfAnnounce(postJsonObject: {}) -> bool:
+def no_of_announces(post_json_object: {}) -> int:
+    """Returns the number of announces on a given post
+    """
+    obj = post_json_object
+    if has_object_dict(post_json_object):
+        obj = post_json_object['object']
+    if not obj.get('shares'):
+        return 0
+    if not isinstance(obj['shares'], dict):
+        return 0
+    if not obj['shares'].get('items'):
+        obj['shares']['items'] = []
+        obj['shares']['totalItems'] = 0
+    return len(obj['shares']['items'])
+
+
+def is_self_announce(post_json_object: {}) -> bool:
    """Is the given post a self announce?
    """
-    if not postJsonObject.get('actor'):
+    if not post_json_object.get('actor'):
        return False
-    if not postJsonObject.get('type'):
+    if not post_json_object.get('type'):
        return False
-    if postJsonObject['type'] != 'Announce':
+    if post_json_object['type'] != 'Announce':
        return False
-    if not postJsonObject.get('object'):
+    if not post_json_object.get('object'):
        return False
-    if not isinstance(postJsonObject['actor'], str):
+    if not isinstance(post_json_object['actor'], str):
        return False
-    if not isinstance(postJsonObject['object'], str):
+    if not isinstance(post_json_object['object'], str):
        return False
-    return postJsonObject['actor'] in postJsonObject['object']
+    return post_json_object['actor'] in post_json_object['object']
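
As a usage illustration (placeholder urls): an actor announcing one of their own statuses counts as a self announce, because the actor id appears within the object url.

```python
# Illustrative check with placeholder urls: the actor id occurs
# within the object url, so this is a self announce
announce_json = {
    'type': 'Announce',
    'actor': 'https://yourdomain/users/yournick',
    'object': 'https://yourdomain/users/yournick/statuses/1234567890'
}
assert is_self_announce(announce_json)
```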
|
||||
|
||||
|
||||
def outboxAnnounce(recentPostsCache: {},
|
||||
baseDir: str, messageJson: {}, debug: bool) -> bool:
|
||||
def outbox_announce(recent_posts_cache: {},
|
||||
base_dir: str, message_json: {}, debug: bool) -> bool:
|
||||
""" Adds or removes announce entries from the shares collection
|
||||
within a given post
|
||||
"""
|
||||
if not messageJson.get('actor'):
|
||||
if not has_actor(message_json, debug):
|
||||
return False
|
||||
if not isinstance(messageJson['actor'], str):
|
||||
if not isinstance(message_json['actor'], str):
|
||||
return False
|
||||
if not messageJson.get('type'):
|
||||
if not message_json.get('type'):
|
||||
return False
|
||||
if not messageJson.get('object'):
|
||||
if not message_json.get('object'):
|
||||
return False
|
||||
if messageJson['type'] == 'Announce':
|
||||
if not isinstance(messageJson['object'], str):
|
||||
if message_json['type'] == 'Announce':
|
||||
if not isinstance(message_json['object'], str):
|
||||
return False
|
||||
if isSelfAnnounce(messageJson):
|
||||
if is_self_announce(message_json):
|
||||
return False
|
||||
nickname = getNicknameFromActor(messageJson['actor'])
|
||||
nickname = get_nickname_from_actor(message_json['actor'])
|
||||
if not nickname:
|
||||
print('WARN: no nickname found in ' + messageJson['actor'])
|
||||
print('WARN: no nickname found in ' + message_json['actor'])
|
||||
return False
|
||||
domain, port = getDomainFromActor(messageJson['actor'])
|
||||
postFilename = locatePost(baseDir, nickname, domain,
|
||||
messageJson['object'])
|
||||
if postFilename:
|
||||
updateAnnounceCollection(recentPostsCache, baseDir, postFilename,
|
||||
messageJson['actor'],
|
||||
nickname, domain, debug)
|
||||
domain, _ = get_domain_from_actor(message_json['actor'])
|
||||
post_filename = locate_post(base_dir, nickname, domain,
|
||||
message_json['object'])
|
||||
if post_filename:
|
||||
update_announce_collection(recent_posts_cache,
|
||||
base_dir, post_filename,
|
||||
message_json['actor'],
|
||||
nickname, domain, debug)
|
||||
return True
|
||||
elif messageJson['type'] == 'Undo':
|
||||
if not hasObjectDict(messageJson):
|
||||
elif message_json['type'] == 'Undo':
|
||||
if not has_object_string_type(message_json, debug):
|
||||
return False
|
||||
if not messageJson['object'].get('type'):
|
||||
return False
|
||||
if messageJson['object']['type'] == 'Announce':
|
||||
if not isinstance(messageJson['object']['object'], str):
|
||||
if message_json['object']['type'] == 'Announce':
|
||||
if not isinstance(message_json['object']['object'], str):
|
||||
return False
|
||||
nickname = getNicknameFromActor(messageJson['actor'])
|
||||
nickname = get_nickname_from_actor(message_json['actor'])
|
||||
if not nickname:
|
||||
print('WARN: no nickname found in ' + messageJson['actor'])
|
||||
print('WARN: no nickname found in ' + message_json['actor'])
|
||||
return False
|
||||
domain, port = getDomainFromActor(messageJson['actor'])
|
||||
postFilename = locatePost(baseDir, nickname, domain,
|
||||
messageJson['object']['object'])
|
||||
if postFilename:
|
||||
undoAnnounceCollectionEntry(recentPostsCache,
|
||||
baseDir, postFilename,
|
||||
messageJson['actor'],
|
||||
domain, debug)
|
||||
domain, _ = get_domain_from_actor(message_json['actor'])
|
||||
post_filename = locate_post(base_dir, nickname, domain,
|
||||
message_json['object']['object'])
|
||||
if post_filename:
|
||||
undo_announce_collection_entry(recent_posts_cache,
|
||||
base_dir, post_filename,
|
||||
message_json['actor'],
|
||||
domain, debug)
|
||||
return True
|
||||
return False
|
||||
|
||||
|
||||
def announcedByPerson(isAnnounced: bool, postActor: str,
|
||||
nickname: str, domainFull: str) -> bool:
|
||||
def announced_by_person(is_announced: bool, post_actor: str,
|
||||
nickname: str, domain_full: str) -> bool:
|
||||
"""Returns True if the given post is announced by the given person
|
||||
"""
|
||||
if not postActor:
|
||||
if not post_actor:
|
||||
return False
|
||||
if isAnnounced and \
|
||||
postActor.endswith(domainFull + '/users/' + nickname):
|
||||
if is_announced and \
|
||||
post_actor.endswith(domain_full + '/users/' + nickname):
|
||||
return True
|
||||
return False
|
||||
|
||||
|
||||
def createAnnounce(session, baseDir: str, federationList: [],
|
||||
nickname: str, domain: str, port: int,
|
||||
toUrl: str, ccUrl: str, httpPrefix: str,
|
||||
objectUrl: str, saveToFile: bool,
|
||||
clientToServer: bool,
|
||||
sendThreads: [], postLog: [],
|
||||
personCache: {}, cachedWebfingers: {},
|
||||
debug: bool, projectVersion: str,
|
||||
signingPrivateKeyPem: str) -> {}:
|
||||
def create_announce(session, base_dir: str, federation_list: [],
|
||||
nickname: str, domain: str, port: int,
|
||||
to_url: str, cc_url: str, http_prefix: str,
|
||||
object_url: str, save_to_file: bool,
|
||||
client_to_server: bool,
|
||||
send_threads: [], post_log: [],
|
||||
person_cache: {}, cached_webfingers: {},
|
||||
debug: bool, project_version: str,
|
||||
signing_priv_key_pem: str,
|
||||
curr_domain: str,
|
||||
onion_domain: str, i2p_domain: str) -> {}:
|
||||
"""Creates an announce message
|
||||
Typically toUrl will be https://www.w3.org/ns/activitystreams#Public
|
||||
and ccUrl might be a specific person favorited or repeated and the
|
||||
followers url objectUrl is typically the url of the message,
|
||||
Typically to_url will be https://www.w3.org/ns/activitystreams#Public
|
||||
and cc_url might be a specific person favorited or repeated and the
|
||||
followers url object_url is typically the url of the message,
|
||||
corresponding to url or atomUri in createPostBase
|
||||
"""
|
||||
if not urlPermitted(objectUrl, federationList):
|
||||
if not url_permitted(object_url, federation_list):
|
||||
return None
|
||||
|
||||
domain = removeDomainPort(domain)
|
||||
fullDomain = getFullDomain(domain, port)
|
||||
domain = remove_domain_port(domain)
|
||||
full_domain = get_full_domain(domain, port)
|
||||
|
||||
statusNumber, published = getStatusNumber()
|
||||
newAnnounceId = httpPrefix + '://' + fullDomain + \
|
||||
'/users/' + nickname + '/statuses/' + statusNumber
|
||||
atomUriStr = localActorUrl(httpPrefix, nickname, fullDomain) + \
|
||||
'/statuses/' + statusNumber
|
||||
newAnnounce = {
|
||||
status_number, published = get_status_number()
|
||||
new_announce_id = http_prefix + '://' + full_domain + \
|
||||
'/users/' + nickname + '/statuses/' + status_number
|
||||
atom_uri_str = local_actor_url(http_prefix, nickname, full_domain) + \
|
||||
'/statuses/' + status_number
|
||||
new_announce = {
|
||||
"@context": "https://www.w3.org/ns/activitystreams",
|
||||
'actor': localActorUrl(httpPrefix, nickname, fullDomain),
|
||||
'atomUri': atomUriStr,
|
||||
'actor': local_actor_url(http_prefix, nickname, full_domain),
|
||||
'atomUri': atom_uri_str,
|
||||
'cc': [],
|
||||
'id': newAnnounceId + '/activity',
|
||||
'object': objectUrl,
|
||||
'id': new_announce_id + '/activity',
|
||||
'object': object_url,
|
||||
'published': published,
|
||||
'to': [toUrl],
|
||||
'to': [to_url],
|
||||
'type': 'Announce'
|
||||
}
|
||||
if ccUrl:
|
||||
if len(ccUrl) > 0:
|
||||
newAnnounce['cc'] = [ccUrl]
|
||||
if saveToFile:
|
||||
outboxDir = createOutboxDir(nickname, domain, baseDir)
|
||||
filename = outboxDir + '/' + newAnnounceId.replace('/', '#') + '.json'
|
||||
saveJson(newAnnounce, filename)
|
||||
if cc_url:
|
||||
if len(cc_url) > 0:
|
||||
new_announce['cc'] = [cc_url]
|
||||
if save_to_file:
|
||||
outbox_dir = create_outbox_dir(nickname, domain, base_dir)
|
||||
filename = \
|
||||
outbox_dir + '/' + new_announce_id.replace('/', '#') + '.json'
|
||||
save_json(new_announce, filename)
|
||||
|
||||
announceNickname = None
|
||||
announceDomain = None
|
||||
announcePort = None
|
||||
groupAccount = False
|
||||
if hasUsersPath(objectUrl):
|
||||
announceNickname = getNicknameFromActor(objectUrl)
|
||||
announceDomain, announcePort = getDomainFromActor(objectUrl)
|
||||
if '/' + str(announceNickname) + '/' in objectUrl:
|
||||
announceActor = \
|
||||
objectUrl.split('/' + announceNickname + '/')[0] + \
|
||||
'/' + announceNickname
|
||||
if hasGroupType(baseDir, announceActor, personCache):
|
||||
groupAccount = True
|
||||
announce_nickname = None
|
||||
announce_domain = None
|
||||
announce_port = None
|
||||
group_account = False
|
||||
if has_users_path(object_url):
|
||||
announce_nickname = get_nickname_from_actor(object_url)
|
||||
if announce_nickname:
|
||||
announce_domain, announce_port = get_domain_from_actor(object_url)
|
||||
if '/' + str(announce_nickname) + '/' in object_url:
|
||||
announce_actor = \
|
||||
object_url.split('/' + announce_nickname + '/')[0] + \
|
||||
'/' + announce_nickname
|
||||
if has_group_type(base_dir, announce_actor, person_cache):
|
||||
group_account = True
|
||||
|
||||
if announceNickname and announceDomain:
|
||||
sendSignedJson(newAnnounce, session, baseDir,
|
||||
nickname, domain, port,
|
||||
announceNickname, announceDomain, announcePort, None,
|
||||
httpPrefix, True, clientToServer, federationList,
|
||||
sendThreads, postLog, cachedWebfingers, personCache,
|
||||
debug, projectVersion, None, groupAccount,
|
||||
signingPrivateKeyPem, 639633)
|
||||
if announce_nickname and announce_domain:
|
||||
send_signed_json(new_announce, session, base_dir,
|
||||
nickname, domain, port,
|
||||
announce_nickname, announce_domain,
|
||||
announce_port,
|
||||
http_prefix, client_to_server, federation_list,
|
||||
send_threads, post_log, cached_webfingers,
|
||||
person_cache,
|
||||
debug, project_version, None, group_account,
|
||||
signing_priv_key_pem, 639633,
|
||||
curr_domain, onion_domain, i2p_domain)
|
||||
|
||||
return newAnnounce
|
||||
return new_announce
|
||||
|
||||
|
||||
def announcePublic(session, baseDir: str, federationList: [],
|
||||
nickname: str, domain: str, port: int, httpPrefix: str,
|
||||
objectUrl: str, clientToServer: bool,
|
||||
sendThreads: [], postLog: [],
|
||||
personCache: {}, cachedWebfingers: {},
|
||||
debug: bool, projectVersion: str,
|
||||
signingPrivateKeyPem: str) -> {}:
|
||||
def announce_public(session, base_dir: str, federation_list: [],
|
||||
nickname: str, domain: str, port: int, http_prefix: str,
|
||||
object_url: str, client_to_server: bool,
|
||||
send_threads: [], post_log: [],
|
||||
person_cache: {}, cached_webfingers: {},
|
||||
debug: bool, project_version: str,
|
||||
signing_priv_key_pem: str,
|
||||
curr_domain: str,
|
||||
onion_domain: str, i2p_domain: str) -> {}:
|
||||
"""Makes a public announcement
|
||||
"""
|
||||
fromDomain = getFullDomain(domain, port)
|
||||
from_domain = get_full_domain(domain, port)
|
||||
|
||||
toUrl = 'https://www.w3.org/ns/activitystreams#Public'
|
||||
ccUrl = localActorUrl(httpPrefix, nickname, fromDomain) + '/followers'
|
||||
return createAnnounce(session, baseDir, federationList,
|
||||
nickname, domain, port,
|
||||
toUrl, ccUrl, httpPrefix,
|
||||
objectUrl, True, clientToServer,
|
||||
sendThreads, postLog,
|
||||
personCache, cachedWebfingers,
|
||||
debug, projectVersion,
|
||||
signingPrivateKeyPem)
|
||||
to_url = 'https://www.w3.org/ns/activitystreams#Public'
|
||||
cc_url = local_actor_url(http_prefix, nickname, from_domain) + '/followers'
|
||||
return create_announce(session, base_dir, federation_list,
|
||||
nickname, domain, port,
|
||||
to_url, cc_url, http_prefix,
|
||||
object_url, True, client_to_server,
|
||||
send_threads, post_log,
|
||||
person_cache, cached_webfingers,
|
||||
debug, project_version,
|
||||
signing_priv_key_pem, curr_domain,
|
||||
onion_domain, i2p_domain)


def send_announce_via_server(base_dir: str, session,
                             from_nickname: str, password: str,
                             from_domain: str, from_port: int,
                             http_prefix: str, repeat_object_url: str,
                             cached_webfingers: {}, person_cache: {},
                             debug: bool, project_version: str,
                             signing_priv_key_pem: str) -> {}:
    """Creates an announce message via c2s
    """
    if not session:
        print('WARN: No session for send_announce_via_server')
        return 6

    from_domain_full = get_full_domain(from_domain, from_port)

    to_url = 'https://www.w3.org/ns/activitystreams#Public'
    actor_str = local_actor_url(http_prefix, from_nickname, from_domain_full)
    cc_url = actor_str + '/followers'

    status_number, published = get_status_number()
    new_announce_id = actor_str + '/statuses/' + status_number
    new_announce_json = {
        "@context": "https://www.w3.org/ns/activitystreams",
        'actor': actor_str,
        'atomUri': new_announce_id,
        'cc': [cc_url],
        'id': new_announce_id + '/activity',
        'object': repeat_object_url,
        'published': published,
        'to': [to_url],
        'type': 'Announce'
    }

    handle = http_prefix + '://' + from_domain_full + '/@' + from_nickname

    # lookup the inbox for the To handle
    wf_request = webfinger_handle(session, handle, http_prefix,
                                  cached_webfingers,
                                  from_domain, project_version, debug, False,
                                  signing_priv_key_pem)
    if not wf_request:
        if debug:
            print('DEBUG: announce webfinger failed for ' + handle)
        return 1
    if not isinstance(wf_request, dict):
        print('WARN: announce webfinger for ' + handle +
              ' did not return a dict. ' + str(wf_request))
        return 1

    post_to_box = 'outbox'

    # get the actor inbox for the To handle
    origin_domain = from_domain
    (inbox_url, _, _, from_person_id,
     _, _, _, _) = get_person_box(signing_priv_key_pem,
                                  origin_domain,
                                  base_dir, session, wf_request,
                                  person_cache,
                                  project_version, http_prefix,
                                  from_nickname, from_domain,
                                  post_to_box, 73528)

    if not inbox_url:
        if debug:
            print('DEBUG: announce no ' + post_to_box +
                  ' was found for ' + handle)
        return 3
    if not from_person_id:
        if debug:
            print('DEBUG: announce no actor was found for ' + handle)
        return 4

    auth_header = create_basic_auth_header(from_nickname, password)

    headers = {
        'host': from_domain,
        'Content-type': 'application/json',
        'Authorization': auth_header
    }
    post_result = post_json(http_prefix, from_domain_full,
                            session, new_announce_json, [], inbox_url,
                            headers, 3, True)
    if not post_result:
        print('WARN: announce not posted')

    if debug:
        print('DEBUG: c2s POST announce success')

    return new_announce_json
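
As a usage sketch only: all values below are invented placeholders, and the create_session helper is assumed to come from the project's session.py, not from this diff.

# hypothetical c2s call; base_dir, credentials and URLs are placeholders
session = create_session(None)  # assumed session helper from session.py
send_announce_via_server('/var/lib/epicyon', session,
                         'alice', 'secret',
                         'example.com', 443, 'https',
                         'https://example.net/users/bob/statuses/123',
                         {}, {}, False, '1.3.0', None)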


def send_undo_announce_via_server(base_dir: str, session,
                                  undo_post_json_object: {},
                                  nickname: str, password: str,
                                  domain: str, port: int, http_prefix: str,
                                  cached_webfingers: {}, person_cache: {},
                                  debug: bool, project_version: str,
                                  signing_priv_key_pem: str) -> {}:
    """Undo an announce message via c2s
    """
    if not session:
        print('WARN: No session for send_undo_announce_via_server')
        return 6

    domain_full = get_full_domain(domain, port)

    actor = local_actor_url(http_prefix, nickname, domain_full)
    handle = replace_users_with_at(actor)

    status_number, _ = get_status_number()
    unannounce_json = {
        '@context': 'https://www.w3.org/ns/activitystreams',
        'id': actor + '/statuses/' + str(status_number) + '/undo',
        'type': 'Undo',
        'actor': actor,
        'object': undo_post_json_object['object']
    }

    # lookup the inbox for the To handle
    wf_request = webfinger_handle(session, handle, http_prefix,
                                  cached_webfingers,
                                  domain, project_version, debug, False,
                                  signing_priv_key_pem)
    if not wf_request:
        if debug:
            print('DEBUG: undo announce webfinger failed for ' + handle)
        return 1
    if not isinstance(wf_request, dict):
        print('WARN: undo announce webfinger for ' + handle +
              ' did not return a dict. ' + str(wf_request))
        return 1

    post_to_box = 'outbox'

    # get the actor inbox for the To handle
    origin_domain = domain
    (inbox_url, _, _, from_person_id,
     _, _, _, _) = get_person_box(signing_priv_key_pem,
                                  origin_domain,
                                  base_dir, session, wf_request,
                                  person_cache,
                                  project_version, http_prefix,
                                  nickname, domain,
                                  post_to_box, 73528)

    if not inbox_url:
        if debug:
            print('DEBUG: undo announce no ' + post_to_box +
                  ' was found for ' + handle)
        return 3
    if not from_person_id:
        if debug:
            print('DEBUG: undo announce no actor was found for ' + handle)
        return 4

    auth_header = create_basic_auth_header(nickname, password)

    headers = {
        'host': domain,
        'Content-type': 'application/json',
        'Authorization': auth_header
    }
    post_result = post_json(http_prefix, domain_full,
                            session, unannounce_json, [], inbox_url,
                            headers, 3, True)
    if not post_result:
        print('WARN: undo announce not posted')

    if debug:
        print('DEBUG: c2s POST undo announce success')

    return unannounce_json
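
The Undo activity built above has roughly this shape; the identifiers below are invented for illustration.

# example only; ids and domains are placeholders
undo_example = {
    '@context': 'https://www.w3.org/ns/activitystreams',
    'id': 'https://example.com/users/alice/statuses/123456/undo',
    'type': 'Undo',
    'actor': 'https://example.com/users/alice',
    'object': 'https://example.net/users/bob/statuses/123'
}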


def outbox_undo_announce(recent_posts_cache: {},
                         base_dir: str, nickname: str, domain: str,
                         message_json: {}, debug: bool) -> None:
    """ When an undo announce is received by the outbox from c2s
    """
    if not message_json.get('type'):
        return
    if not message_json['type'] == 'Undo':
        return
    if not has_object_string_type(message_json, debug):
        return
    if not message_json['object']['type'] == 'Announce':
        if debug:
            print('DEBUG: not an undo announce')
        return
    if not has_object_string_object(message_json, debug):
        return
    if debug:
        print('DEBUG: c2s undo announce request arrived in outbox')

    message_id = remove_id_ending(message_json['object']['object'])
    domain = remove_domain_port(domain)
    post_filename = locate_post(base_dir, nickname, domain, message_id)
    if not post_filename:
        if debug:
            print('DEBUG: c2s undo announce post not found in inbox or outbox')
            print(message_id)
        return True
    undo_announce_collection_entry(recent_posts_cache, base_dir, post_filename,
                                   message_json['actor'], domain, debug)
    if debug:
        print('DEBUG: post undo announce via c2s - ' + post_filename)
305
auth.py
@@ -1,7 +1,7 @@
__filename__ = "auth.py"
__author__ = "Bob Mottram"
__license__ = "AGPL3+"
__version__ = "1.3.0"
__maintainer__ = "Bob Mottram"
__email__ = "bob@libreserver.org"
__status__ = "Production"
@@ -13,11 +13,13 @@ import binascii
import os
import secrets
import datetime
from utils import is_system_account
from utils import has_users_path
from utils import text_in_file
from utils import remove_eol


def _hash_password(password: str) -> str:
    """Hash a password for storing
    """
    salt = hashlib.sha256(os.urandom(60)).hexdigest().encode('ascii')
@@ -28,17 +30,17 @@ def _hash_password(password: str) -> str:
    return (salt + pwdhash).decode('ascii')


def _get_password_hash(salt: str, provided_password: str) -> str:
    """Returns the hash of a password
    """
    pwdhash = hashlib.pbkdf2_hmac('sha512',
                                  provided_password.encode('utf-8'),
                                  salt.encode('ascii'),
                                  100000)
    return binascii.hexlify(pwdhash).decode('ascii')
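
As a quick sanity check of the parameters above: PBKDF2-HMAC-SHA512 with the default derived key length yields a 64 byte digest, i.e. 128 hex characters after hexlify. A minimal standalone sketch (inputs invented):

import binascii
import hashlib

# same algorithm and iteration count as _get_password_hash
digest = hashlib.pbkdf2_hmac('sha512', b'example-password',
                             b'example-salt', 100000)
print(len(binascii.hexlify(digest)))  # 128 hex characters (64 bytes)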


def constant_time_string_check(string1: str, string2: str) -> bool:
    """Compares two strings and returns whether they are the same,
    using a constant amount of time
    See https://sqreen.github.io/DevelopersSecurityBestPractices/
@@ -49,8 +51,8 @@ def constant_time_string_check(string1: str, string2: str) -> bool:
        return False
    ctr = 0
    matched = True
    for char in string1:
        if char != string2[ctr]:
            matched = False
        else:
            # this is to make the timing more even
@@ -60,199 +62,244 @@ def constant_time_string_check(string1: str, string2: str) -> bool:
    return matched
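
A standard-library alternative for the same purpose is hmac.compare_digest, which also compares in constant time; a minimal sketch:

import hmac

# constant-time comparison from the standard library
if hmac.compare_digest('abc123', 'abc123'):
    print('strings match')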


def _verify_password(stored_password: str, provided_password: str) -> bool:
    """Verify a stored password against one provided by user
    """
    if not stored_password:
        return False
    if not provided_password:
        return False
    salt = stored_password[:64]
    stored_password = stored_password[64:]
    pw_hash = _get_password_hash(salt, provided_password)
    return constant_time_string_check(pw_hash, stored_password)
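
Putting the helpers together, the stored format is a 64 character hex salt followed by a 128 character hex digest; a round-trip sketch (these are module-private helpers, shown only for illustration):

# stored value is salt (64 hex chars) + pbkdf2 digest (128 hex chars)
stored = _hash_password('correct horse battery staple')
assert len(stored) == 192
assert _verify_password(stored, 'correct horse battery staple')
assert not _verify_password(stored, 'wrong password')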


def create_basic_auth_header(nickname: str, password: str) -> str:
    """This is only used by tests
    """
    auth_str = \
        remove_eol(nickname) + \
        ':' + \
        remove_eol(password)
    return 'Basic ' + \
        base64.b64encode(auth_str.encode('utf-8')).decode('utf-8')
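
Decoding the produced header recovers the nickname:password pair; the values below are invented:

import base64

header = create_basic_auth_header('alice', 'secret')
# header looks like 'Basic YWxpY2U6c2VjcmV0'
b64_part = header.split(' ')[1]
print(base64.b64decode(b64_part).decode('utf-8'))  # alice:secret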


def authorize_basic(base_dir: str, path: str, auth_header: str,
                    debug: bool) -> bool:
    """HTTP basic auth
    """
    if ' ' not in auth_header:
        if debug:
            print('DEBUG: basic auth - Authorisation header does not ' +
                  'contain a space character')
        return False
    if not has_users_path(path):
        if not path.startswith('/calendars/'):
            if debug:
                print('DEBUG: basic auth - ' +
                      'path for Authorization does not contain a user')
            return False
    if path.startswith('/calendars/'):
        path_users_section = path.split('/calendars/')[1]
        nickname_from_path = path_users_section
        if '/' in nickname_from_path:
            nickname_from_path = nickname_from_path.split('/')[0]
        if '?' in nickname_from_path:
            nickname_from_path = nickname_from_path.split('?')[0]
    else:
        path_users_section = path.split('/users/')[1]
        if '/' not in path_users_section:
            if debug:
                print('DEBUG: basic auth - this is not a users endpoint')
            return False
        nickname_from_path = path_users_section.split('/')[0]
    if is_system_account(nickname_from_path):
        print('basic auth - attempted login using system account ' +
              nickname_from_path + ' in path')
        return False
    base64_str1 = auth_header.split(' ')[1]
    base64_str = remove_eol(base64_str1)
    plain = base64.b64decode(base64_str).decode('utf-8')
    if ':' not in plain:
        if debug:
            print('DEBUG: basic auth header does not contain a ":" ' +
                  'separator for username:password')
        return False
    nickname = plain.split(':')[0]
    if is_system_account(nickname):
        print('basic auth - attempted login using system account ' + nickname +
              ' in Auth header')
        return False
    if nickname != nickname_from_path:
        if debug:
            print('DEBUG: Nickname given in the path (' + nickname_from_path +
                  ') does not match the one in the Authorization header (' +
                  nickname + ')')
        return False
    password_file = base_dir + '/accounts/passwords'
    if not os.path.isfile(password_file):
        if debug:
            print('DEBUG: passwords file missing')
        return False
    provided_password = plain.split(':')[1]
    try:
        with open(password_file, 'r', encoding='utf-8') as passfile:
            for line in passfile:
                if not line.startswith(nickname + ':'):
                    continue
                stored_password_base = line.split(':')[1]
                stored_password = remove_eol(stored_password_base)
                success = _verify_password(stored_password, provided_password)
                if not success:
                    if debug:
                        print('DEBUG: Password check failed for ' + nickname)
                return success
    except OSError:
        print('EX: failed to open password file')
        return False
    print('DEBUG: Did not find credentials for ' + nickname +
          ' in ' + password_file)
    return False
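
A sketch of a server-side check, with an invented base directory and path; this only succeeds once credentials for the account exist in accounts/passwords:

auth_header = create_basic_auth_header('alice', 'secret')
allowed = authorize_basic('/var/lib/epicyon', '/users/alice/outbox',
                          auth_header, debug=True)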


def store_basic_credentials(base_dir: str,
                            nickname: str, password: str) -> bool:
    """Stores login credentials to a file
    """
    if ':' in nickname or ':' in password:
        return False
    nickname = remove_eol(nickname).strip()
    password = remove_eol(password).strip()

    if not os.path.isdir(base_dir + '/accounts'):
        os.mkdir(base_dir + '/accounts')

    password_file = base_dir + '/accounts/passwords'
    store_str = nickname + ':' + _hash_password(password)
    if os.path.isfile(password_file):
        if text_in_file(nickname + ':', password_file):
            try:
                with open(password_file, 'r', encoding='utf-8') as fin:
                    with open(password_file + '.new', 'w+',
                              encoding='utf-8') as fout:
                        for line in fin:
                            if not line.startswith(nickname + ':'):
                                fout.write(line)
                            else:
                                fout.write(store_str + '\n')
            except OSError as ex:
                print('EX: unable to save password ' + password_file +
                      ' ' + str(ex))
                return False

            try:
                os.rename(password_file + '.new', password_file)
            except OSError:
                print('EX: unable to save password 2')
                return False
        else:
            # append to password file
            try:
                with open(password_file, 'a+', encoding='utf-8') as passfile:
                    passfile.write(store_str + '\n')
            except OSError:
                print('EX: unable to append password')
                return False
    else:
        try:
            with open(password_file, 'w+', encoding='utf-8') as passfile:
                passfile.write(store_str + '\n')
        except OSError:
            print('EX: unable to create password file')
            return False
    return True
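
Each line of the accounts/passwords file therefore takes the form nickname:salt+hash, with a 64 hex character salt followed by a 128 hex character digest; a sketch with invented values:

# writes a line like 'alice:<64 hex chars of salt><128 hex chars of digest>'
store_basic_credentials('/var/lib/epicyon', 'alice', 'secret')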


def remove_password(base_dir: str, nickname: str) -> None:
    """Removes the password entry for the given nickname
    This is called during account removal
    """
    password_file = base_dir + '/accounts/passwords'
    if os.path.isfile(password_file):
        try:
            with open(password_file, 'r', encoding='utf-8') as fin:
                with open(password_file + '.new', 'w+',
                          encoding='utf-8') as fout:
                    for line in fin:
                        if not line.startswith(nickname + ':'):
                            fout.write(line)
        except OSError as ex:
            print('EX: unable to remove password from file ' + str(ex))
            return

        try:
            os.rename(password_file + '.new', password_file)
        except OSError:
            print('EX: unable to remove password from file 2')
            return


def authorize(base_dir: str, path: str, auth_header: str, debug: bool) -> bool:
    """Authorize using http header
    """
    if auth_header.lower().startswith('basic '):
        return authorize_basic(base_dir, path, auth_header, debug)
    return False


def create_password(length: int):
    valid_chars = 'abcdefghijklmnopqrstuvwxyz' + \
        'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
    return ''.join((secrets.choice(valid_chars) for i in range(length)))
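
The secrets module draws from the operating system's CSPRNG, which is the appropriate source for password material; for example:

print(create_password(10))  # e.g. 'K3xq9ZtRbA', different on each call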


def record_login_failure(base_dir: str, ip_address: str,
                         count_dict: {}, fail_time: int,
                         log_to_file: bool) -> None:
    """Keeps ip addresses and the number of times login failures
    occurred for them in a dict
    """
    if not count_dict.get(ip_address):
        while len(count_dict.items()) > 100:
            oldest_time = 0
            oldest_ip = None
            for ip_addr, ip_item in count_dict.items():
                if oldest_time == 0 or ip_item['time'] < oldest_time:
                    oldest_time = ip_item['time']
                    oldest_ip = ip_addr
            if oldest_ip:
                del count_dict[oldest_ip]
        count_dict[ip_address] = {
            "count": 1,
            "time": fail_time
        }
    else:
        count_dict[ip_address]['count'] += 1
        count_dict[ip_address]['time'] = fail_time
        fail_count = count_dict[ip_address]['count']
        if fail_count > 4:
            print('WARN: ' + str(ip_address) + ' failed to log in ' +
                  str(fail_count) + ' times')

    if not log_to_file:
        return

    failure_log = base_dir + '/accounts/loginfailures.log'
    write_type = 'a+'
    if not os.path.isfile(failure_log):
        write_type = 'w+'
    curr_time = datetime.datetime.utcnow()
    curr_time_str = curr_time.strftime("%Y-%m-%d %H:%M:%SZ")
    try:
        with open(failure_log, write_type, encoding='utf-8') as fp_fail:
            # here we use a similar format to an ssh log, so that
            # systems such as fail2ban can parse it
            fp_fail.write(curr_time_str + ' ' +
                          'ip-127-0-0-1 sshd[20710]: ' +
                          'Disconnecting invalid user epicyon ' +
                          ip_address + ' port 443: ' +
                          'Too many authentication failures [preauth]\n')
    except OSError:
        print('EX: record_login_failure failed ' + str(failure_log))
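
The entry mimics sshd's log format so that stock fail2ban ssh filters can match it; a sketch with invented values (the fail_time here is assumed to be a unix timestamp):

# appends an sshd-style line containing the failing IP address
record_login_failure('/var/lib/epicyon', '10.0.0.5',
                     count_dict={}, fail_time=1640995200,
                     log_to_file=True)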
168
availability.py
@@ -1,160 +1,162 @@
__filename__ = "availability.py"
__author__ = "Bob Mottram"
__license__ = "AGPL3+"
__version__ = "1.3.0"
__maintainer__ = "Bob Mottram"
__email__ = "bob@libreserver.org"
__status__ = "Production"
__module_group__ = "Profile Metadata"

import os
from webfinger import webfinger_handle
from auth import create_basic_auth_header
from posts import get_person_box
from session import post_json
from utils import has_object_string
from utils import get_full_domain
from utils import get_nickname_from_actor
from utils import get_domain_from_actor
from utils import load_json
from utils import save_json
from utils import acct_dir
from utils import local_actor_url
from utils import has_actor


def set_availability(base_dir: str, nickname: str, domain: str,
                     status: str) -> bool:
    """Set an availability status
    """
    # avoid giant strings
    if len(status) > 128:
        return False
    actor_filename = acct_dir(base_dir, nickname, domain) + '.json'
    if not os.path.isfile(actor_filename):
        return False
    actor_json = load_json(actor_filename)
    if actor_json:
        actor_json['availability'] = status
        save_json(actor_json, actor_filename)
    return True


def get_availability(base_dir: str, nickname: str, domain: str) -> str:
    """Returns the availability for a given person
    """
    actor_filename = acct_dir(base_dir, nickname, domain) + '.json'
    if not os.path.isfile(actor_filename):
        return False
    actor_json = load_json(actor_filename)
    if actor_json:
        if not actor_json.get('availability'):
            return None
        return actor_json['availability']
    return None
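
The pair round-trips through the account's actor JSON file; a sketch with an invented base directory and account:

set_availability('/var/lib/epicyon', 'alice', 'example.com', 'busy')
print(get_availability('/var/lib/epicyon', 'alice', 'example.com'))  # busy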


def outbox_availability(base_dir: str, nickname: str, message_json: {},
                        debug: bool) -> bool:
    """Handles receiving an availability update
    """
    if not message_json.get('type'):
        return False
    if not message_json['type'] == 'Availability':
        return False
    if not has_actor(message_json, debug):
        return False
    if not has_object_string(message_json, debug):
        return False

    actor_nickname = get_nickname_from_actor(message_json['actor'])
    if not actor_nickname:
        return False
    if actor_nickname != nickname:
        return False
    domain, _ = get_domain_from_actor(message_json['actor'])
    status = message_json['object'].replace('"', '')

    return set_availability(base_dir, nickname, domain, status)


def send_availability_via_server(base_dir: str, session,
                                 nickname: str, password: str,
                                 domain: str, port: int,
                                 http_prefix: str,
                                 status: str,
                                 cached_webfingers: {}, person_cache: {},
                                 debug: bool, project_version: str,
                                 signing_priv_key_pem: str) -> {}:
    """Sets the availability for a person via c2s
    """
    if not session:
        print('WARN: No session for send_availability_via_server')
        return 6

    domain_full = get_full_domain(domain, port)

    to_url = local_actor_url(http_prefix, nickname, domain_full)
    cc_url = to_url + '/followers'

    new_availability_json = {
        'type': 'Availability',
        'actor': to_url,
        'object': '"' + status + '"',
        'to': [to_url],
        'cc': [cc_url]
    }

    handle = http_prefix + '://' + domain_full + '/@' + nickname

    # lookup the inbox for the To handle
    wf_request = webfinger_handle(session, handle, http_prefix,
                                  cached_webfingers,
                                  domain, project_version, debug, False,
                                  signing_priv_key_pem)
    if not wf_request:
        if debug:
            print('DEBUG: availability webfinger failed for ' + handle)
        return 1
    if not isinstance(wf_request, dict):
        print('WARN: availability webfinger for ' + handle +
              ' did not return a dict. ' + str(wf_request))
        return 1

    post_to_box = 'outbox'

    # get the actor inbox for the To handle
    origin_domain = domain
    (inbox_url, _, _, from_person_id, _, _,
     _, _) = get_person_box(signing_priv_key_pem,
                            origin_domain,
                            base_dir, session, wf_request,
                            person_cache, project_version,
                            http_prefix, nickname,
                            domain, post_to_box, 57262)

    if not inbox_url:
        if debug:
            print('DEBUG: availability no ' + post_to_box +
                  ' was found for ' + handle)
        return 3
    if not from_person_id:
        if debug:
            print('DEBUG: availability no actor was found for ' + handle)
        return 4

    auth_header = create_basic_auth_header(nickname, password)

    headers = {
        'host': domain,
        'Content-type': 'application/json',
        'Authorization': auth_header
    }
    post_result = post_json(http_prefix, domain_full,
                            session, new_availability_json, [],
                            inbox_url, headers, 30, True)
    if not post_result:
        print('WARN: availability failed to post')

    if debug:
        print('DEBUG: c2s POST availability success')

    return new_availability_json
1282
blocking.py
787
bookmarks.py
137
briar.py
@@ -1,104 +1,129 @@
__filename__ = "briar.py"
__author__ = "Bob Mottram"
__license__ = "AGPL3+"
__version__ = "1.3.0"
__maintainer__ = "Bob Mottram"
__email__ = "bob@libreserver.org"
__status__ = "Production"
__module_group__ = "Profile Metadata"

from utils import get_attachment_property_value


def get_briar_address(actor_json: {}) -> str:
    """Returns briar address for the given actor
    """
    if not actor_json.get('attachment'):
        return ''
    for property_value in actor_json['attachment']:
        name_value = None
        if property_value.get('name'):
            name_value = property_value['name']
        elif property_value.get('schema:name'):
            name_value = property_value['schema:name']
        if not name_value:
            continue
        if not name_value.lower().startswith('briar'):
            continue
        if not property_value.get('type'):
            continue
        prop_value_name, prop_value = \
            get_attachment_property_value(property_value)
        if not prop_value:
            continue
        if not property_value['type'].endswith('PropertyValue'):
            continue
        property_value[prop_value_name] = prop_value.strip()
        if len(property_value[prop_value_name]) < 50:
            continue
        if not property_value[prop_value_name].startswith('briar://'):
            continue
        if property_value[prop_value_name].lower() != \
           property_value[prop_value_name]:
            continue
        if '"' in property_value[prop_value_name]:
            continue
        if ' ' in property_value[prop_value_name]:
            continue
        if ',' in property_value[prop_value_name]:
            continue
        if '.' in property_value[prop_value_name]:
            continue
        return property_value[prop_value_name]
    return ''


def set_briar_address(actor_json: {}, briar_address: str) -> None:
    """Sets a briar address for the given actor
    """
    not_briar_address = False

    if len(briar_address) < 50:
        not_briar_address = True
    if not briar_address.startswith('briar://'):
        not_briar_address = True
    if briar_address.lower() != briar_address:
        not_briar_address = True
    if '"' in briar_address:
        not_briar_address = True
    if ' ' in briar_address:
        not_briar_address = True
    if '.' in briar_address:
        not_briar_address = True
    if ',' in briar_address:
        not_briar_address = True
    if '<' in briar_address:
        not_briar_address = True

    if not actor_json.get('attachment'):
        actor_json['attachment'] = []

    # remove any existing value
    property_found = None
    for property_value in actor_json['attachment']:
        name_value = None
        if property_value.get('name'):
            name_value = property_value['name']
        elif property_value.get('schema:name'):
            name_value = property_value['schema:name']
        if not name_value:
            continue
        if not property_value.get('type'):
            continue
        if not name_value.lower().startswith('briar'):
            continue
        property_found = property_value
        break
    if property_found:
        actor_json['attachment'].remove(property_found)
    if not_briar_address:
        return

    for property_value in actor_json['attachment']:
        name_value = None
        if property_value.get('name'):
            name_value = property_value['name']
        elif property_value.get('schema:name'):
            name_value = property_value['schema:name']
        if not name_value:
            continue
        if not property_value.get('type'):
            continue
        if not name_value.lower().startswith('briar'):
            continue
        if not property_value['type'].endswith('PropertyValue'):
            continue
        prop_value_name, _ = \
            get_attachment_property_value(property_value)
        if not prop_value_name:
            continue
        property_value[prop_value_name] = briar_address
        return

    new_briar_address = {
        "name": "Briar",
        "type": "PropertyValue",
        "value": briar_address
    }
    actor_json['attachment'].append(new_briar_address)
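
In the actor document the address is stored as a PropertyValue attachment; the address below is a made-up placeholder of the right shape (lowercase, no dots or spaces, at least 50 characters, briar:// prefix):

actor_json = {'attachment': []}
address = 'briar://' + 'ab3' * 16  # placeholder, 56 characters in total
set_briar_address(actor_json, address)
print(get_briar_address(actor_json) == address)  # True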
|
|
239
cache.py
@@ -1,7 +1,7 @@
__filename__ = "cache.py"
__author__ = "Bob Mottram"
__license__ = "AGPL3+"
__version__ = "1.3.0"
__maintainer__ = "Bob Mottram"
__email__ = "bob@libreserver.org"
__status__ = "Production"
@@ -9,179 +9,194 @@ __module_group__ = "Core"

import os
import datetime
from session import url_exists
from session import get_json
from utils import load_json
from utils import save_json
from utils import get_file_case_insensitive
from utils import get_user_paths


def _remove_person_from_cache(base_dir: str, person_url: str,
                              person_cache: {}) -> bool:
    """Removes an actor from the cache
    """
    cache_filename = base_dir + '/cache/actors/' + \
        person_url.replace('/', '#') + '.json'
    if os.path.isfile(cache_filename):
        try:
            os.remove(cache_filename)
        except OSError:
            print('EX: unable to delete cached actor ' + str(cache_filename))
    if person_cache.get(person_url):
        del person_cache[person_url]


def check_for_changed_actor(session, base_dir: str,
                            http_prefix: str, domain_full: str,
                            person_url: str, avatar_url: str, person_cache: {},
                            timeout_sec: int):
    """Checks if the avatar url exists and if not then
    the actor has probably changed without receiving an actor/Person Update.
    So clear the actor from the cache and it will be refreshed when the next
    post from them is sent
    """
    if not session or not avatar_url:
        return
    if domain_full in avatar_url:
        return
    if url_exists(session, avatar_url, timeout_sec, http_prefix, domain_full):
        return
    _remove_person_from_cache(base_dir, person_url, person_cache)


def store_person_in_cache(base_dir: str, person_url: str,
                          person_json: {}, person_cache: {},
                          allow_write_to_file: bool) -> None:
    """Store an actor in the cache
    """
    if 'statuses' in person_url or person_url.endswith('/actor'):
        # This is not an actor or person account
        return

    curr_time = datetime.datetime.utcnow()
    person_cache[person_url] = {
        "actor": person_json,
        "timestamp": curr_time.strftime("%Y-%m-%dT%H:%M:%SZ")
    }
    if not base_dir:
        return

    # store to file
    if not allow_write_to_file:
        return
    if os.path.isdir(base_dir + '/cache/actors'):
        cache_filename = base_dir + '/cache/actors/' + \
            person_url.replace('/', '#') + '.json'
        if not os.path.isfile(cache_filename):
            save_json(person_json, cache_filename)


def get_person_from_cache(base_dir: str, person_url: str,
                          person_cache: {}) -> {}:
    """Get an actor from the cache
    """
    # if the actor is not in memory then try to load it from file
    loaded_from_file = False
    if not person_cache.get(person_url):
        # does the person exist as a cached file?
        cache_filename = base_dir + '/cache/actors/' + \
            person_url.replace('/', '#') + '.json'
        actor_filename = get_file_case_insensitive(cache_filename)
        if actor_filename:
            person_json = load_json(actor_filename)
            if person_json:
                store_person_in_cache(base_dir, person_url, person_json,
                                      person_cache, False)
                loaded_from_file = True

    if person_cache.get(person_url):
        if not loaded_from_file:
            # update the timestamp for the last time the actor was retrieved
            curr_time = datetime.datetime.utcnow()
            curr_time_str = curr_time.strftime("%Y-%m-%dT%H:%M:%SZ")
            person_cache[person_url]['timestamp'] = curr_time_str
        return person_cache[person_url]['actor']
    return None
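
A sketch of the in-memory round trip (the URL is invented; an empty base_dir skips the file store):

person_cache = {}
actor = {'id': 'https://example.com/users/alice', 'type': 'Person'}
store_person_in_cache('', 'https://example.com/users/alice',
                      actor, person_cache, False)
assert get_person_from_cache('', 'https://example.com/users/alice',
                             person_cache) == actor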


def expire_person_cache(person_cache: {}):
    """Expires old entries from the cache in memory
    """
    curr_time = datetime.datetime.utcnow()
    removals = []
    for person_url, cache_json in person_cache.items():
        cache_time = datetime.datetime.strptime(cache_json['timestamp'],
                                                "%Y-%m-%dT%H:%M:%SZ")
        days_since_cached = (curr_time - cache_time).days
        if days_since_cached > 2:
            removals.append(person_url)
    if len(removals) > 0:
        for person_url in removals:
            del person_cache[person_url]
        print(str(len(removals)) + ' actors were expired from the cache')


def store_webfinger_in_cache(handle: str, webfing,
                             cached_webfingers: {}) -> None:
    """Store a webfinger endpoint in the cache
    """
    cached_webfingers[handle] = webfing


def get_webfinger_from_cache(handle: str, cached_webfingers: {}) -> {}:
    """Get webfinger endpoint from the cache
    """
    if cached_webfingers.get(handle):
        return cached_webfingers[handle]
    return None


def get_person_pub_key(base_dir: str, session, person_url: str,
                       person_cache: {}, debug: bool,
                       project_version: str, http_prefix: str,
                       domain: str, onion_domain: str,
                       i2p_domain: str,
                       signing_priv_key_pem: str) -> str:
    """Get the public key for an actor
    """
    if not person_url:
        return None
    if '#/publicKey' in person_url:
        person_url = person_url.replace('#/publicKey', '')
    elif '/main-key' in person_url:
        person_url = person_url.replace('/main-key', '')
    else:
        person_url = person_url.replace('#main-key', '')
    users_paths = get_user_paths()
    for possible_users_path in users_paths:
        if person_url.endswith(possible_users_path + 'inbox'):
            if debug:
                print('DEBUG: Obtaining public key for shared inbox')
            person_url = \
                person_url.replace(possible_users_path + 'inbox', '/inbox')
            break
    person_json = \
        get_person_from_cache(base_dir, person_url, person_cache)
    if not person_json:
        if debug:
            print('DEBUG: Obtaining public key for ' + person_url)
        person_domain = domain
        if onion_domain:
            if '.onion/' in person_url:
                person_domain = onion_domain
        elif i2p_domain:
            if '.i2p/' in person_url:
                person_domain = i2p_domain
        profile_str = 'https://www.w3.org/ns/activitystreams'
        accept_str = \
            'application/activity+json; profile="' + profile_str + '"'
        as_header = {
            'Accept': accept_str
        }
        person_json = \
            get_json(signing_priv_key_pem,
                     session, person_url, as_header, None, debug,
                     project_version, http_prefix, person_domain)
        if not person_json:
            return None
    pub_key = None
    if person_json.get('publicKey'):
        if person_json['publicKey'].get('publicKeyPem'):
            pub_key = person_json['publicKey']['publicKeyPem']
    else:
        if person_json.get('publicKeyPem'):
            pub_key = person_json['publicKeyPem']

    if not pub_key:
        if debug:
            print('DEBUG: Public key not found for ' + person_url)

    store_person_in_cache(base_dir, person_url, person_json,
                          person_cache, True)
    return pub_key
@@ -1,23 +1,28 @@
# Example configuration file for running Caddy2 in front of Epicyon

YOUR_DOMAIN {
    tls USER@YOUR_DOMAIN

    header {
        Strict-Transport-Security "max-age=31556925"
        Content-Security-Policy "default-src https:; script-src https: 'unsafe-inline'; style-src https: 'unsafe-inline'"
        X-Content-Type-Options "nosniff"
        X-Download-Options "noopen"
        X-Frame-Options "DENY"
        X-Permitted-Cross-Domain-Policies "none"
        X-XSS-Protection "1; mode=block"
    }

    route /newsmirror/* {
        root * /var/www/YOUR_DOMAIN
        file_server
    }

    route /* {
        reverse_proxy http://127.0.0.1:7156
    }

    encode zstd gzip
}

# eof
210
categories.py
@@ -1,7 +1,7 @@
__filename__ = "categories.py"
__author__ = "Bob Mottram"
__license__ = "AGPL3+"
__version__ = "1.3.0"
__maintainer__ = "Bob Mottram"
__email__ = "bob@libreserver.org"
__status__ = "Production"
|
@ -10,118 +10,131 @@ __module_group__ = "RSS Feeds"
|
|||
import os
|
||||
import datetime
|
||||
|
||||
MAX_TAG_LENGTH = 42
|
||||
|
||||
def getHashtagCategory(baseDir: str, hashtag: str) -> str:
|
||||
INVALID_HASHTAG_CHARS = (',', ' ', '<', ';', '\\', '"', '&', '#')
|
||||
|
||||
|
||||
def get_hashtag_category(base_dir: str, hashtag: str) -> str:
|
||||
"""Returns the category for the hashtag
|
||||
"""
|
||||
categoryFilename = baseDir + '/tags/' + hashtag + '.category'
|
||||
if not os.path.isfile(categoryFilename):
|
||||
categoryFilename = baseDir + '/tags/' + hashtag.title() + '.category'
|
||||
if not os.path.isfile(categoryFilename):
|
||||
categoryFilename = \
|
||||
baseDir + '/tags/' + hashtag.upper() + '.category'
|
||||
if not os.path.isfile(categoryFilename):
|
||||
category_filename = base_dir + '/tags/' + hashtag + '.category'
|
||||
if not os.path.isfile(category_filename):
|
||||
category_filename = base_dir + '/tags/' + hashtag.title() + '.category'
|
||||
if not os.path.isfile(category_filename):
|
||||
category_filename = \
|
||||
base_dir + '/tags/' + hashtag.upper() + '.category'
|
||||
if not os.path.isfile(category_filename):
|
||||
return ''
|
||||
|
||||
with open(categoryFilename, 'r') as fp:
|
||||
categoryStr = fp.read()
|
||||
if categoryStr:
|
||||
return categoryStr
|
||||
category_str = None
|
||||
try:
|
||||
with open(category_filename, 'r', encoding='utf-8') as category_file:
|
||||
category_str = category_file.read()
|
||||
except OSError:
|
||||
print('EX: unable to read category ' + category_filename)
|
||||
if category_str:
|
||||
return category_str
|
||||
return ''
|
||||
|
||||
|
||||
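To illustrate the filename convention implemented above (tags/<hashtag>.category, with .title() and .upper() variants tried as fallbacks), a minimal usage sketch, assuming categories.py is importable and using an invented base directory:

import os
from categories import get_hashtag_category

base_dir = '/tmp/epicyon-example'   # hypothetical data directory
os.makedirs(base_dir + '/tags', exist_ok=True)
with open(base_dir + '/tags/Solarpunk.category', 'w',
          encoding='utf-8') as fp:
    fp.write('environment')

# 'solarpunk'.title() == 'Solarpunk', so the title-case fallback finds it
print(get_hashtag_category(base_dir, 'solarpunk'))   # environment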
-def getHashtagCategories(baseDir: str,
-                         recent: bool = False, category: str = None) -> None:
+def get_hashtag_categories(base_dir: str,
+                           recent: bool = False,
+                           category: str = None) -> None:
     """Returns a dictionary containing hashtag categories
     """
-    maxTagLength = 42
-    hashtagCategories = {}
+    hashtag_categories = {}

     if recent:
-        currTime = datetime.datetime.utcnow()
-        daysSinceEpoch = (currTime - datetime.datetime(1970, 1, 1)).days
-        recently = daysSinceEpoch - 1
+        curr_time = datetime.datetime.utcnow()
+        days_since_epoch = (curr_time - datetime.datetime(1970, 1, 1)).days
+        recently = days_since_epoch - 1

-    for subdir, dirs, files in os.walk(baseDir + '/tags'):
-        for f in files:
-            if not f.endswith('.category'):
+    for _, _, files in os.walk(base_dir + '/tags'):
+        for catfile in files:
+            if not catfile.endswith('.category'):
                 continue
-            categoryFilename = os.path.join(baseDir + '/tags', f)
-            if not os.path.isfile(categoryFilename):
+            category_filename = os.path.join(base_dir + '/tags', catfile)
+            if not os.path.isfile(category_filename):
                 continue
-            hashtag = f.split('.')[0]
-            if len(hashtag) > maxTagLength:
+            hashtag = catfile.split('.')[0]
+            if len(hashtag) > MAX_TAG_LENGTH:
                 continue
-            with open(categoryFilename, 'r') as fp:
-                categoryStr = fp.read()
+            with open(category_filename, 'r', encoding='utf-8') as fp_category:
+                category_str = fp_category.read()

-                if not categoryStr:
+                if not category_str:
                     continue

                 if category:
                     # only return a dictionary for a specific category
-                    if categoryStr != category:
+                    if category_str != category:
                         continue

                 if recent:
-                    tagsFilename = baseDir + '/tags/' + hashtag + '.txt'
-                    if not os.path.isfile(tagsFilename):
+                    tags_filename = base_dir + '/tags/' + hashtag + '.txt'
+                    if not os.path.isfile(tags_filename):
                         continue
-                    modTimesinceEpoc = \
-                        os.path.getmtime(tagsFilename)
-                    lastModifiedDate = \
-                        datetime.datetime.fromtimestamp(modTimesinceEpoc)
-                    fileDaysSinceEpoch = \
-                        (lastModifiedDate -
+                    mod_time_since_epoc = \
+                        os.path.getmtime(tags_filename)
+                    last_modified_date = \
+                        datetime.datetime.fromtimestamp(mod_time_since_epoc)
+                    file_days_since_epoch = \
+                        (last_modified_date -
                          datetime.datetime(1970, 1, 1)).days
-                    if fileDaysSinceEpoch < recently:
+                    if file_days_since_epoch < recently:
                         continue

-                if not hashtagCategories.get(categoryStr):
-                    hashtagCategories[categoryStr] = [hashtag]
+                if not hashtag_categories.get(category_str):
+                    hashtag_categories[category_str] = [hashtag]
                 else:
-                    if hashtag not in hashtagCategories[categoryStr]:
-                        hashtagCategories[categoryStr].append(hashtag)
+                    if hashtag not in hashtag_categories[category_str]:
+                        hashtag_categories[category_str].append(hashtag)
                 break
-    return hashtagCategories
+    return hashtag_categories
-def updateHashtagCategories(baseDir: str) -> None:
+def update_hashtag_categories(base_dir: str) -> None:
     """Regenerates the list of hashtag categories
     """
-    categoryListFilename = baseDir + '/accounts/categoryList.txt'
-    hashtagCategories = getHashtagCategories(baseDir)
-    if not hashtagCategories:
-        if os.path.isfile(categoryListFilename):
+    category_list_filename = base_dir + '/accounts/categoryList.txt'
+    hashtag_categories = get_hashtag_categories(base_dir)
+    if not hashtag_categories:
+        if os.path.isfile(category_list_filename):
             try:
-                os.remove(categoryListFilename)
-            except BaseException:
-                pass
+                os.remove(category_list_filename)
+            except OSError:
+                print('EX: update_hashtag_categories ' +
+                      'unable to delete cached category list ' +
+                      category_list_filename)
         return

-    categoryList = []
-    for categoryStr, hashtagList in hashtagCategories.items():
-        categoryList.append(categoryStr)
-    categoryList.sort()
+    category_list = []
+    for category_str, _ in hashtag_categories.items():
+        category_list.append(category_str)
+    category_list.sort()

-    categoryListStr = ''
-    for categoryStr in categoryList:
-        categoryListStr += categoryStr + '\n'
+    category_list_str = ''
+    for category_str in category_list:
+        category_list_str += category_str + '\n'

     # save a list of available categories for quick lookup
-    with open(categoryListFilename, 'w+') as fp:
-        fp.write(categoryListStr)
+    try:
+        with open(category_list_filename, 'w+',
+                  encoding='utf-8') as fp_category:
+            fp_category.write(category_list_str)
+    except OSError:
+        print('EX: unable to write category ' + category_list_filename)
-def _validHashtagCategory(category: str) -> bool:
+def _valid_hashtag_category(category: str) -> bool:
     """Returns true if the category name is valid
     """
     if not category:
         return False

-    invalidChars = (',', ' ', '<', ';', '\\', '"', '&', '#')
-    for ch in invalidChars:
-        if ch in category:
+    for char in INVALID_HASHTAG_CHARS:
+        if char in category:
             return False

     # too long

@@ -131,52 +144,61 @@ def _validHashtagCategory(category: str) -> bool:
     return True


-def setHashtagCategory(baseDir: str, hashtag: str, category: str,
-                       update: bool, force: bool = False) -> bool:
+def set_hashtag_category(base_dir: str, hashtag: str, category: str,
+                         update: bool, force: bool = False) -> bool:
     """Sets the category for the hashtag
     """
-    if not _validHashtagCategory(category):
+    if not _valid_hashtag_category(category):
         return False

     if not force:
-        hashtagFilename = baseDir + '/tags/' + hashtag + '.txt'
-        if not os.path.isfile(hashtagFilename):
+        hashtag_filename = base_dir + '/tags/' + hashtag + '.txt'
+        if not os.path.isfile(hashtag_filename):
             hashtag = hashtag.title()
-            hashtagFilename = baseDir + '/tags/' + hashtag + '.txt'
-            if not os.path.isfile(hashtagFilename):
+            hashtag_filename = base_dir + '/tags/' + hashtag + '.txt'
+            if not os.path.isfile(hashtag_filename):
                 hashtag = hashtag.upper()
-                hashtagFilename = baseDir + '/tags/' + hashtag + '.txt'
-                if not os.path.isfile(hashtagFilename):
+                hashtag_filename = base_dir + '/tags/' + hashtag + '.txt'
+                if not os.path.isfile(hashtag_filename):
                     return False

-    if not os.path.isdir(baseDir + '/tags'):
-        os.mkdir(baseDir + '/tags')
-    categoryFilename = baseDir + '/tags/' + hashtag + '.category'
+    if not os.path.isdir(base_dir + '/tags'):
+        os.mkdir(base_dir + '/tags')
+    category_filename = base_dir + '/tags/' + hashtag + '.category'
     if force:
         # don't overwrite any existing categories
-        if os.path.isfile(categoryFilename):
+        if os.path.isfile(category_filename):
             return False
-    with open(categoryFilename, 'w+') as fp:
-        fp.write(category)

+    category_written = False
+    try:
+        with open(category_filename, 'w+', encoding='utf-8') as fp_category:
+            fp_category.write(category)
+            category_written = True
+    except OSError as ex:
+        print('EX: unable to write category ' + category_filename +
+              ' ' + str(ex))
+
+    if category_written:
         if update:
-            updateHashtagCategories(baseDir)
+            update_hashtag_categories(base_dir)
         return True

+    return False
+

-def guessHashtagCategory(tagName: str, hashtagCategories: {}) -> str:
+def guess_hashtag_category(tagName: str, hashtag_categories: {}) -> str:
     """Tries to guess a category for the given hashtag.
     This works by trying to find the longest similar hashtag
     """
     if len(tagName) < 4:
         return ''

-    categoryMatched = ''
-    tagMatchedLen = 0
+    category_matched = ''
+    tag_matched_len = 0

-    for categoryStr, hashtagList in hashtagCategories.items():
-        for hashtag in hashtagList:
+    for category_str, hashtag_list in hashtag_categories.items():
+        for hashtag in hashtag_list:
             if len(hashtag) < 4:
                 # avoid matching very small strings which often
                 # lead to spurious categories

@@ -184,13 +206,13 @@ def guessHashtagCategory(tagName: str, hashtagCategories: {}) -> str:
             if hashtag not in tagName:
                 if tagName not in hashtag:
                     continue
-            if not categoryMatched:
-                tagMatchedLen = len(hashtag)
-                categoryMatched = categoryStr
+            if not category_matched:
+                tag_matched_len = len(hashtag)
+                category_matched = category_str
             else:
                 # match the longest tag
-                if len(hashtag) > tagMatchedLen:
-                    categoryMatched = categoryStr
-    if not categoryMatched:
+                if len(hashtag) > tag_matched_len:
+                    category_matched = category_str
+    if not category_matched:
         return ''
-    return categoryMatched
+    return category_matched
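A usage sketch of the longest-substring guessing above, with an invented category dictionary, assuming categories.py is importable. 'garden' is a substring of 'gardening', so its category is matched:

from categories import guess_hashtag_category

cats = {'environment': ['garden', 'forest'], 'culture': ['art']}
print(guess_hashtag_category('gardening', cats))   # environment
print(guess_hashtag_category('xyz', cats))         # '' (tag too short)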
city.py
@@ -1,7 +1,7 @@
 __filename__ = "city.py"
 __author__ = "Bob Mottram"
 __license__ = "AGPL3+"
-__version__ = "1.2.0"
+__version__ = "1.3.0"
 __maintainer__ = "Bob Mottram"
 __email__ = "bob@libreserver.org"
 __status__ = "Production"

@@ -12,7 +12,8 @@ import datetime
 import random
 import math
 from random import randint
-from utils import acctDir
+from utils import acct_dir
+from utils import remove_eol

 # states which the simulated city dweller can be in
 PERSON_SLEEP = 0

@@ -22,8 +23,10 @@ PERSON_SHOP = 3
 PERSON_EVENING = 4
 PERSON_PARTY = 5

+BUSY_STATES = (PERSON_WORK, PERSON_SHOP, PERSON_PLAY, PERSON_PARTY)
+
-def _getDecoyCamera(decoySeed: int) -> (str, str, int):
+
+def _get_decoy_camera(decoy_seed: int) -> (str, str, int):
     """Returns a decoy camera make and model which took the photo
     """
     cameras = [

@@ -37,10 +40,16 @@ def _getDecoyCamera(decoySeed: int) -> (str, str, int):
         ["Apple", "iPhone 12"],
         ["Apple", "iPhone 12 Mini"],
         ["Apple", "iPhone 12 Pro Max"],
+        ["Apple", "iPhone 13"],
+        ["Apple", "iPhone 13 Mini"],
+        ["Apple", "iPhone 13 Pro"],
+        ["Samsung", "Galaxy Note 20 Ultra"],
+        ["Samsung", "Galaxy S20 Plus"],
+        ["Samsung", "Galaxy S20 FE 5G"],
+        ["Samsung", "Galaxy Z FOLD 2"],
         ["Samsung", "Galaxy S12 Plus"],
         ["Samsung", "Galaxy S12"],
         ["Samsung", "Galaxy S11 Plus"],
         ["Samsung", "Galaxy S10 Plus"],
         ["Samsung", "Galaxy S10e"],
         ["Samsung", "Galaxy Z Flip"],

@@ -50,8 +59,13 @@ def _getDecoyCamera(decoySeed: int) -> (str, str, int):
         ["Samsung", "Galaxy S10e"],
         ["Samsung", "Galaxy S10 5G"],
         ["Samsung", "Galaxy A60"],
         ["Samsung", "Note 12"],
         ["Samsung", "Note 12 Plus"],
         ["Samsung", "Note 11"],
         ["Samsung", "Note 11 Plus"],
         ["Samsung", "Note 10"],
         ["Samsung", "Note 10 Plus"],
+        ["Samsung", "Galaxy S22 Ultra"],
+        ["Samsung", "Galaxy S21 Ultra"],
+        ["Samsung", "Galaxy Note 20 Ultra"],
+        ["Samsung", "Galaxy S21"],

@@ -60,6 +74,8 @@ def _getDecoyCamera(decoySeed: int) -> (str, str, int):
         ["Samsung", "Galaxy Z Fold 2"],
         ["Samsung", "Galaxy A52 5G"],
         ["Samsung", "Galaxy A71 5G"],
+        ["Google", "Pixel 6 Pro"],
+        ["Google", "Pixel 6"],
         ["Google", "Pixel 5"],
         ["Google", "Pixel 4a"],
         ["Google", "Pixel 4 XL"],

@@ -69,13 +85,13 @@ def _getDecoyCamera(decoySeed: int) -> (str, str, int):
         ["Google", "Pixel 3"],
         ["Google", "Pixel 3a"]
     ]
-    randgen = random.Random(decoySeed)
+    randgen = random.Random(decoy_seed)
     index = randgen.randint(0, len(cameras) - 1)
-    serialNumber = randgen.randint(100000000000, 999999999999999999999999)
-    return cameras[index][0], cameras[index][1], serialNumber
+    serial_number = randgen.randint(100000000000, 999999999999999999999999)
+    return cameras[index][0], cameras[index][1], serial_number
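The decoy camera choice is reproducible for a given seed because a dedicated random.Random(seed) generator is used rather than the shared module-level RNG. A minimal sketch of the same idea, with an invented model list:

import random

def pick_decoy(seed: int) -> str:
    # a fresh generator seeded per call gives a stable choice
    rng = random.Random(seed)
    models = ['iPhone 13', 'Pixel 6', 'Galaxy S21']
    return models[rng.randint(0, len(models) - 1)]

assert pick_decoy(1234) == pick_decoy(1234)   # same seed, same decoy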
-def _getCityPulse(currTimeOfDay, decoySeed: int) -> (float, float):
+def _get_city_pulse(curr_time_of_day, decoy_seed: int) -> (float, float):
     """This simulates expected average patterns of movement in a city.
     Jane or Joe average lives and works in the city, commuting in
     and out of the central district for work. They have a unique

@@ -84,143 +100,149 @@ def _getCityPulse(currTimeOfDay, decoySeed: int) -> (float, float):
     Distance from the city centre is in the range 0.0 - 1.0
     Angle is in radians
     """
-    randgen = random.Random(decoySeed)
+    randgen = random.Random(decoy_seed)
     variance = 3
-    busyStates = (PERSON_WORK, PERSON_SHOP, PERSON_PLAY, PERSON_PARTY)
-    dataDecoyState = PERSON_SLEEP
-    weekday = currTimeOfDay.weekday()
-    minHour = 7 + randint(0, variance)
-    maxHour = 17 + randint(0, variance)
-    if currTimeOfDay.hour > minHour:
-        if currTimeOfDay.hour <= maxHour:
+    data_decoy_state = PERSON_SLEEP
+    weekday = curr_time_of_day.weekday()
+    min_hour = 7 + randint(0, variance)
+    max_hour = 17 + randint(0, variance)
+    if curr_time_of_day.hour > min_hour:
+        if curr_time_of_day.hour <= max_hour:
             if weekday < 5:
-                dataDecoyState = PERSON_WORK
+                data_decoy_state = PERSON_WORK
             elif weekday == 5:
-                dataDecoyState = PERSON_SHOP
+                data_decoy_state = PERSON_SHOP
             else:
-                dataDecoyState = PERSON_PLAY
+                data_decoy_state = PERSON_PLAY
         else:
             if weekday < 5:
-                dataDecoyState = PERSON_EVENING
+                data_decoy_state = PERSON_EVENING
             else:
-                dataDecoyState = PERSON_PARTY
-    randgen2 = random.Random(decoySeed + dataDecoyState)
-    angleRadians = \
+                data_decoy_state = PERSON_PARTY
+    randgen2 = random.Random(decoy_seed + data_decoy_state)
+    angle_radians = \
         (randgen2.randint(0, 100000) / 100000) * 2 * math.pi
     # some people are quite random, others have more predictable habits
-    decoyRandomness = randgen.randint(1, 3)
+    decoy_randomness = randgen.randint(1, 3)
     # occasionally throw in a wildcard to keep the machine learning guessing
-    if randint(0, 100) < decoyRandomness:
-        distanceFromCityCenter = (randint(0, 100000) / 100000)
-        angleRadians = (randint(0, 100000) / 100000) * 2 * math.pi
+    if randint(0, 100) < decoy_randomness:
+        distance_from_city_center = (randint(0, 100000) / 100000)
+        angle_radians = (randint(0, 100000) / 100000) * 2 * math.pi
     else:
         # what constitutes the central district is fuzzy
-        centralDistrictFuzz = (randgen.randint(0, 100000) / 100000) * 0.1
-        busyRadius = 0.3 + centralDistrictFuzz
-        if dataDecoyState in busyStates:
+        central_district_fuzz = (randgen.randint(0, 100000) / 100000) * 0.1
+        busy_radius = 0.3 + central_district_fuzz
+        if data_decoy_state in BUSY_STATES:
             # if we are busy then we're somewhere in the city center
-            distanceFromCityCenter = \
-                (randgen.randint(0, 100000) / 100000) * busyRadius
+            distance_from_city_center = \
+                (randgen.randint(0, 100000) / 100000) * busy_radius
         else:
             # otherwise we're in the burbs
-            distanceFromCityCenter = busyRadius + \
-                ((1.0 - busyRadius) * (randgen.randint(0, 100000) / 100000))
-    return distanceFromCityCenter, angleRadians
+            distance_from_city_center = busy_radius + \
+                ((1.0 - busy_radius) * (randgen.randint(0, 100000) / 100000))
+    return distance_from_city_center, angle_radians
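A small demonstration of the pulse function above, assuming city.py is importable: the same decoy seed yields different positions at working hours versus the middle of the night (the module-level randint calls add jitter, so outputs vary between runs):

import datetime
from city import _get_city_pulse

seed = 63725                                     # invented decoy seed
work = datetime.datetime(2022, 3, 14, 11, 0)     # Monday, mid-morning
night = datetime.datetime(2022, 3, 14, 3, 0)     # same day, 3am
print(_get_city_pulse(work, seed))    # likely near the centre (working)
print(_get_city_pulse(night, seed))   # likely out in the suburbs (asleep)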
-def parseNogoString(nogoLine: str) -> []:
+def parse_nogo_string(nogo_line: str) -> []:
     """Parses a line from locations_nogo.txt and returns the polygon
     """
-    nogoLine = nogoLine.replace('\n', '').replace('\r', '')
-    polygonStr = nogoLine.split(':', 1)[1]
-    if ';' in polygonStr:
-        pts = polygonStr.split(';')
+    nogo_line = remove_eol(nogo_line)
+    polygon_str = nogo_line.split(':', 1)[1]
+    if ';' in polygon_str:
+        pts = polygon_str.split(';')
     else:
-        pts = polygonStr.split(',')
+        pts = polygon_str.split(',')
     if len(pts) <= 4:
         return []
     polygon = []
     for index in range(int(len(pts)/2)):
         if index*2 + 1 >= len(pts):
             break
-        longitudeStr = pts[index*2].strip()
-        latitudeStr = pts[index*2 + 1].strip()
-        if 'E' in latitudeStr or 'W' in latitudeStr:
-            longitudeStr = pts[index*2 + 1].strip()
-            latitudeStr = pts[index*2].strip()
-        if 'E' in longitudeStr:
-            longitudeStr = \
-                longitudeStr.replace('E', '')
-            longitude = float(longitudeStr)
-        elif 'W' in longitudeStr:
-            longitudeStr = \
-                longitudeStr.replace('W', '')
-            longitude = -float(longitudeStr)
+        longitude_str = pts[index*2].strip()
+        latitude_str = pts[index*2 + 1].strip()
+        if 'E' in latitude_str or 'W' in latitude_str:
+            longitude_str = pts[index*2 + 1].strip()
+            latitude_str = pts[index*2].strip()
+        if 'E' in longitude_str:
+            longitude_str = \
+                longitude_str.replace('E', '')
+            longitude = float(longitude_str)
+        elif 'W' in longitude_str:
+            longitude_str = \
+                longitude_str.replace('W', '')
+            longitude = -float(longitude_str)
         else:
-            longitude = float(longitudeStr)
-        latitude = float(latitudeStr)
+            longitude = float(longitude_str)
+        latitude = float(latitude_str)
         polygon.append([latitude, longitude])
     return polygon
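An example line in the locations_nogo.txt format parsed above: a city prefix, then comma-separated longitude,latitude pairs (the coordinates here are invented), assuming city.py is importable:

from city import parse_nogo_string

line = 'london:-0.23888,51.54484,-0.23444,51.51848,' + \
    '-0.17321,51.51755,-0.18178,51.54412'
polygon = parse_nogo_string(line)
print(len(polygon), 'vertices')   # 4 vertices, each a [lat, lon] pair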
-def spoofGeolocation(baseDir: str,
-                     city: str, currTime, decoySeed: int,
-                     citiesList: [],
-                     nogoList: []) -> (float, float, str, str,
-                                       str, str, int):
+def spoof_geolocation(base_dir: str,
+                      city: str, curr_time, decoy_seed: int,
+                      cities_list: [],
+                      nogo_list: []) -> (float, float, str, str,
+                                         str, str, int):
     """Given a city and the current time spoofs the location
     for an image
     returns latitude, longitude, N/S, E/W,
     camera make, camera model, camera serial number
     """
-    locationsFilename = baseDir + '/custom_locations.txt'
-    if not os.path.isfile(locationsFilename):
-        locationsFilename = baseDir + '/locations.txt'
+    locations_filename = base_dir + '/custom_locations.txt'
+    if not os.path.isfile(locations_filename):
+        locations_filename = base_dir + '/locations.txt'

-    nogoFilename = baseDir + '/custom_locations_nogo.txt'
-    if not os.path.isfile(nogoFilename):
-        nogoFilename = baseDir + '/locations_nogo.txt'
+    nogo_filename = base_dir + '/custom_locations_nogo.txt'
+    if not os.path.isfile(nogo_filename):
+        nogo_filename = base_dir + '/locations_nogo.txt'

-    manCityRadius = 0.1
-    varianceAtLocation = 0.0004
+    man_city_radius = 0.1
+    variance_at_location = 0.0004
     default_latitude = 51.8744
     default_longitude = 0.368333
     default_latdirection = 'N'
     default_longdirection = 'W'

-    if citiesList:
-        cities = citiesList
+    if cities_list:
+        cities = cities_list
     else:
-        if not os.path.isfile(locationsFilename):
+        if not os.path.isfile(locations_filename):
            return (default_latitude, default_longitude,
                    default_latdirection, default_longdirection,
                    "", "", 0)
        cities = []
-        with open(locationsFilename, 'r') as f:
-            cities = f.readlines()
+        try:
+            with open(locations_filename, 'r', encoding='utf-8') as loc_file:
+                cities = loc_file.readlines()
+        except OSError:
+            print('EX: unable to read locations ' + locations_filename)

     nogo = []
-    if nogoList:
-        nogo = nogoList
+    if nogo_list:
+        nogo = nogo_list
     else:
-        if os.path.isfile(nogoFilename):
-            with open(nogoFilename, 'r') as f:
-                nogoList = f.readlines()
-            for line in nogoList:
-                if line.startswith(city + ':'):
-                    polygon = parseNogoString(line)
-                    if polygon:
-                        nogo.append(polygon)
+        if os.path.isfile(nogo_filename):
+            nogo_list = []
+            try:
+                with open(nogo_filename, 'r', encoding='utf-8') as nogo_file:
+                    nogo_list = nogo_file.readlines()
+            except OSError:
+                print('EX: unable to read ' + nogo_filename)
+            for line in nogo_list:
+                if line.startswith(city + ':'):
+                    polygon = parse_nogo_string(line)
+                    if polygon:
+                        nogo.append(polygon)

     city = city.lower()
-    for cityName in cities:
-        if city in cityName.lower():
-            cityFields = cityName.split(':')
-            latitude = cityFields[1]
-            longitude = cityFields[2]
-            areaKm2 = 0
-            if len(cityFields) > 3:
-                areaKm2 = int(cityFields[3])
+    for city_name in cities:
+        if city in city_name.lower():
+            city_fields = city_name.split(':')
+            latitude = city_fields[1]
+            longitude = city_fields[2]
+            area_km2 = 0
+            if len(city_fields) > 3:
+                area_km2 = int(city_fields[3])
             latdirection = 'N'
             longdirection = 'E'
             if 'S' in latitude:

@@ -232,99 +254,108 @@ def spoofGeolocation(baseDir: str,
             latitude = float(latitude)
             longitude = float(longitude)
             # get the time of day at the city
-            approxTimeZone = int(longitude / 15.0)
+            approx_time_zone = int(longitude / 15.0)
             if longdirection == 'E':
-                approxTimeZone = -approxTimeZone
-            currTimeAdjusted = currTime - \
-                datetime.timedelta(hours=approxTimeZone)
-            camMake, camModel, camSerialNumber = \
-                _getDecoyCamera(decoySeed)
-            validCoord = False
-            seedOffset = 0
-            while not validCoord:
+                approx_time_zone = -approx_time_zone
+            curr_time_adjusted = curr_time - \
+                datetime.timedelta(hours=approx_time_zone)
+            cam_make, cam_model, cam_serial_number = \
+                _get_decoy_camera(decoy_seed)
+            valid_coord = False
+            seed_offset = 0
+            while not valid_coord:
                 # patterns of activity change in the city over time
-                (distanceFromCityCenter, angleRadians) = \
-                    _getCityPulse(currTimeAdjusted, decoySeed + seedOffset)
+                (distance_from_city_center, angle_radians) = \
+                    _get_city_pulse(curr_time_adjusted,
+                                    decoy_seed + seed_offset)
                 # The city radius value is in longitude and the reference
                 # is Manchester. Adjust for the radius of the chosen city.
-                if areaKm2 > 1:
-                    manRadius = math.sqrt(1276 / math.pi)
-                    radius = math.sqrt(areaKm2 / math.pi)
-                    cityRadiusDeg = (radius / manRadius) * manCityRadius
+                if area_km2 > 1:
+                    man_radius = math.sqrt(1276 / math.pi)
+                    radius = math.sqrt(area_km2 / math.pi)
+                    city_radius_deg = (radius / man_radius) * man_city_radius
                 else:
-                    cityRadiusDeg = manCityRadius
+                    city_radius_deg = man_city_radius
                 # Get the position within the city, with some randomness added
                 latitude += \
-                    distanceFromCityCenter * cityRadiusDeg * \
-                    math.cos(angleRadians)
+                    distance_from_city_center * city_radius_deg * \
+                    math.cos(angle_radians)
                 longitude += \
-                    distanceFromCityCenter * cityRadiusDeg * \
-                    math.sin(angleRadians)
+                    distance_from_city_center * city_radius_deg * \
+                    math.sin(angle_radians)
                 longval = longitude
                 if longdirection == 'W':
                     longval = -longitude
-                validCoord = not pointInNogo(nogo, latitude, longval)
-                if not validCoord:
-                    seedOffset += 1
-                    if seedOffset > 100:
+                valid_coord = not point_in_nogo(nogo, latitude, longval)
+                if not valid_coord:
+                    seed_offset += 1
+                    if seed_offset > 100:
                         break
             # add a small amount of variance around the location
             fraction = randint(0, 100000) / 100000
-            distanceFromLocation = fraction * fraction * varianceAtLocation
+            distance_from_location = fraction * fraction * variance_at_location
             fraction = randint(0, 100000) / 100000
-            angleFromLocation = fraction * 2 * math.pi
-            latitude += distanceFromLocation * math.cos(angleFromLocation)
-            longitude += distanceFromLocation * math.sin(angleFromLocation)
+            angle_from_location = fraction * 2 * math.pi
+            latitude += distance_from_location * math.cos(angle_from_location)
+            longitude += distance_from_location * math.sin(angle_from_location)

             # gps locations aren't transcendental, so round to a fixed
             # number of decimal places
             latitude = int(latitude * 100000) / 100000.0
             longitude = int(longitude * 100000) / 100000.0
             return (latitude, longitude, latdirection, longdirection,
-                    camMake, camModel, camSerialNumber)
+                    cam_make, cam_model, cam_serial_number)

     return (default_latitude, default_longitude,
             default_latdirection, default_longdirection,
             "", "", 0)
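A usage sketch for the function above, assuming city.py is importable. The base directory here is invented; with no cities_list supplied and no locations file present, the default coordinates are returned:

import datetime
from city import spoof_geolocation

(lat, lon, lat_dir, lon_dir, cam_make, cam_model, serial) = \
    spoof_geolocation('/tmp/epicyon-example', 'new york, usa',
                      datetime.datetime.utcnow(), 63725, [], [])
print(lat, lat_dir, lon, lon_dir, cam_make, cam_model)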
-def getSpoofedCity(city: str, baseDir: str, nickname: str, domain: str) -> str:
+def get_spoofed_city(city: str, base_dir: str,
+                     nickname: str, domain: str) -> str:
     """Returns the name of the city to use as a GPS spoofing location for
     image metadata
     """
     city = ''
-    cityFilename = acctDir(baseDir, nickname, domain) + '/city.txt'
-    if os.path.isfile(cityFilename):
-        with open(cityFilename, 'r') as fp:
-            city = fp.read().replace('\n', '')
+    city_filename = acct_dir(base_dir, nickname, domain) + '/city.txt'
+    if os.path.isfile(city_filename):
+        try:
+            with open(city_filename, 'r', encoding='utf-8') as city_file:
+                city1 = city_file.read()
+                city = remove_eol(city1)
+        except OSError:
+            print('EX: unable to read ' + city_filename)
     return city
-def _pointInPolygon(poly: [], x: float, y: float) -> bool:
+def _point_in_polygon(poly: [], x_coord: float, y_coord: float) -> bool:
     """Returns true if the given point is inside the given polygon
     """
-    n = len(poly)
+    num = len(poly)
     inside = False
     p2x = 0.0
     p2y = 0.0
     xints = 0.0
     p1x, p1y = poly[0]
-    for i in range(n + 1):
-        p2x, p2y = poly[i % n]
-        if y > min(p1y, p2y):
-            if y <= max(p1y, p2y):
-                if x <= max(p1x, p2x):
+    for i in range(num + 1):
+        p2x, p2y = poly[i % num]
+        if y_coord > min(p1y, p2y):
+            if y_coord <= max(p1y, p2y):
+                if x_coord <= max(p1x, p2x):
                     if p1y != p2y:
-                        xints = (y - p1y) * (p2x - p1x) / (p2y - p1y) + p1x
-                    if p1x == p2x or x <= xints:
+                        xints = \
+                            (y_coord - p1y) * (p2x - p1x) / (p2y - p1y) + p1x
+                    if p1x == p2x or x_coord <= xints:
                         inside = not inside
         p1x, p1y = p2x, p2y

     return inside
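The function above is the classic ray-casting point-in-polygon test. A quick sanity check with a unit square, assuming city.py is importable:

from city import _point_in_polygon

square = [[0.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 0.0]]
assert _point_in_polygon(square, 0.5, 0.5)        # centre is inside
assert not _point_in_polygon(square, 1.5, 0.5)    # point to the right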
-def pointInNogo(nogo: [], latitude: float, longitude: float) -> bool:
+def point_in_nogo(nogo: [], latitude: float, longitude: float) -> bool:
     """Returns true if the given geolocation is within a nogo area
     """
     for polygon in nogo:
-        if _pointInPolygon(polygon, latitude, longitude):
+        if _point_in_polygon(polygon, latitude, longitude):
             return True
     return False
@@ -38,7 +38,7 @@ No insults, harassment (sexual or otherwise), condescension, ad hominem, threats

 Condescension means treating others as inferior. Subtle condescension still violates the Code of Conduct even if not blatantly demeaning.

-No stereotyping of or promoting prejudice or discrimination against particular groups or classes/castes of people, including sexism, racism, homophobia, transphobia, age discrimination or discrimination based upon nationality.
+No stereotyping of or promoting prejudice or discrimination against particular groups or classes/castes of people, including sexism, racism, homophobia, transphobia, denying people their right to join or create a trade union, age discrimination or discrimination based upon nationality.

 In cases where criticism of ideology or culture remains on-topic, respectfully discuss the ideas.
content.py

context.py
@@ -1,18 +1,19 @@
 __filename__ = "inbox.py"
 __author__ = "Bob Mottram"
 __license__ = "AGPL3+"
-__version__ = "1.2.0"
+__version__ = "1.3.0"
 __maintainer__ = "Bob Mottram"
 __email__ = "bob@libreserver.org"
 __status__ = "Production"
 __module_group__ = "Security"


-validContexts = (
+VALID_CONTEXTS = (
     "https://www.w3.org/ns/activitystreams",
     "https://w3id.org/identity/v1",
     "https://w3id.org/security/v1",
     "*/apschema/v1.9",
     "*/apschema/v1.10",
     "*/apschema/v1.21",
     "*/apschema/v1.20",
     "*/litepub-0.1.jsonld",

@@ -20,37 +21,55 @@ validContexts = (
 )


-def hasValidContext(postJsonObject: {}) -> bool:
+def get_individual_post_context() -> []:
+    """Returns the context for an individual post
+    """
+    return [
+        'https://www.w3.org/ns/activitystreams',
+        {
+            "ostatus": "http://ostatus.org#",
+            "atomUri": "ostatus:atomUri",
+            "inReplyToAtomUri": "ostatus:inReplyToAtomUri",
+            "conversation": "ostatus:conversation",
+            "sensitive": "as:sensitive",
+            "toot": "http://joinmastodon.org/ns#",
+            "votersCount": "toot:votersCount",
+            "blurhash": "toot:blurhash"
+        }
+    ]
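A sketch of attaching the per-post @context produced by the helper above to an outgoing activity, assuming context.py is importable; the post fields here are invented:

from context import get_individual_post_context

post = {
    '@context': get_individual_post_context(),
    'type': 'Note',
    'content': 'Hello fediverse',
    'sensitive': False
}
print(post['@context'][0])   # https://www.w3.org/ns/activitystreams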
+def has_valid_context(post_json_object: {}) -> bool:
     """Are the links within the @context of a post recognised?
     """
-    if not postJsonObject.get('@context'):
+    if not post_json_object.get('@context'):
         return False
-    if isinstance(postJsonObject['@context'], list):
-        for url in postJsonObject['@context']:
+    if isinstance(post_json_object['@context'], list):
+        for url in post_json_object['@context']:
             if not isinstance(url, str):
                 continue
-            if url not in validContexts:
-                wildcardFound = False
-                for c in validContexts:
-                    if c.startswith('*'):
-                        c = c.replace('*', '')
-                        if url.endswith(c):
-                            wildcardFound = True
+            if url not in VALID_CONTEXTS:
+                wildcard_found = False
+                for cont in VALID_CONTEXTS:
+                    if cont.startswith('*'):
+                        cont = cont.replace('*', '')
+                        if url.endswith(cont):
+                            wildcard_found = True
+                            break
-                if not wildcardFound:
+                if not wildcard_found:
                     print('Unrecognized @context: ' + url)
                     return False
-    elif isinstance(postJsonObject['@context'], str):
-        url = postJsonObject['@context']
-        if url not in validContexts:
-            wildcardFound = False
-            for c in validContexts:
-                if c.startswith('*'):
-                    c = c.replace('*', '')
-                    if url.endswith(c):
-                        wildcardFound = True
+    elif isinstance(post_json_object['@context'], str):
+        url = post_json_object['@context']
+        if url not in VALID_CONTEXTS:
+            wildcard_found = False
+            for cont in VALID_CONTEXTS:
+                if cont.startswith('*'):
+                    cont = cont.replace('*', '')
+                    if url.endswith(cont):
+                        wildcard_found = True
+                        break
-            if not wildcardFound:
+            if not wildcard_found:
                 print('Unrecognized @context: ' + url)
                 return False
     else:
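Entries in VALID_CONTEXTS beginning with '*' act as suffix wildcards, so any instance domain can serve a recognised schema path. A compact standalone equivalent of the matching loop above (not Epicyon's own function):

def context_allowed(url: str, valid: tuple) -> bool:
    # exact match, or suffix match against '*'-prefixed entries
    for cont in valid:
        if cont.startswith('*'):
            if url.endswith(cont.replace('*', '')):
                return True
        elif url == cont:
            return True
    return False

assert context_allowed('https://example.social/apschema/v1.21',
                       ('*/apschema/v1.21',))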
@@ -138,6 +157,44 @@ def getApschemaV1_20() -> {}:
     }


+def getApschemaV1_10() -> {}:
+    # https://domain/apschema/v1.10
+    return {
+        '@context': {
+            'Hashtag': 'as:Hashtag',
+            'PropertyValue': 'schema:PropertyValue',
+            'commentPolicy': 'zot:commentPolicy',
+            'conversation': 'ostatus:conversation',
+            'diaspora': 'https://diasporafoundation.org/ns/',
+            'directMessage': 'zot:directMessage',
+            'emojiReaction': 'zot:emojiReaction',
+            'expires': 'zot:expires',
+            'guid': 'diaspora:guid',
+            'id': '@id',
+            'locationAddress': 'zot:locationAddress',
+            'locationDeleted': 'zot:locationDeleted',
+            'locationPrimary': 'zot:locationPrimary',
+            'magicEnv': {'@id': 'zot:magicEnv', '@type': '@id'},
+            'manuallyApprovesFollowers': 'as:manuallyApprovesFollowers',
+            'meAlgorithm': 'zot:meAlgorithm',
+            'meCreator': 'zot:meCreator',
+            'meData': 'zot:meData',
+            'meDataType': 'zot:meDataType',
+            'meEncoding': 'zot:meEncoding',
+            'meSignatureValue': 'zot:meSignatureValue',
+            'nomadicHubs': 'zot:nomadicHubs',
+            'nomadicLocation': 'zot:nomadicLocation',
+            'nomadicLocations': {'@id': 'zot:nomadicLocations',
+                                 '@type': '@id'},
+            'ostatus': 'http://ostatus.org#',
+            'schema': 'http://schema.org#',
+            'type': '@type',
+            'value': 'schema:value',
+            'zot': 'https://hubzilla.vikshepa.com/apschema#'
+        }
+    }
+
+
 def getApschemaV1_21() -> {}:
     # https://domain/apschema/v1.21
     return {

@@ -169,7 +226,7 @@ def getApschemaV1_21() -> {}:
     }


-def getLitepubSocial() -> {}:
+def get_litepub_social() -> {}:
     # https://litepub.social/litepub/context.jsonld
     return {
         '@context': [

@@ -241,7 +298,7 @@ def getLitepubV0_1() -> {}:
     }


-def getV1SecuritySchema() -> {}:
+def get_v1security_schema() -> {}:
     # https://w3id.org/security/v1
     return {
         "@context": {

@@ -294,7 +351,7 @@ def getV1SecuritySchema() -> {}:
     }


-def getV1Schema() -> {}:
+def get_v1schema() -> {}:
     # https://w3id.org/identity/v1
     return {
         "@context": {

@@ -382,7 +439,7 @@ def getV1Schema() -> {}:
     }


-def getActivitystreamsSchema() -> {}:
+def get_activitystreams_schema() -> {}:
     # https://www.w3.org/ns/activitystreams
     return {
         "@context": {
conversation.py
@@ -1,79 +1,100 @@
 __filename__ = "conversation.py"
 __author__ = "Bob Mottram"
 __license__ = "AGPL3+"
-__version__ = "1.2.0"
+__version__ = "1.3.0"
 __maintainer__ = "Bob Mottram"
 __email__ = "bob@libreserver.org"
 __status__ = "Production"
 __module_group__ = "Timeline"

 import os
-from utils import hasObjectDict
-from utils import acctDir
-from utils import removeIdEnding
+from utils import has_object_dict
+from utils import acct_dir
+from utils import remove_id_ending
+from utils import text_in_file


-def updateConversation(baseDir: str, nickname: str, domain: str,
-                       postJsonObject: {}) -> bool:
-    """Adds a post to a conversation index in the /conversation subdirectory
-    """
-    if not hasObjectDict(postJsonObject):
-        return False
-    if not postJsonObject['object'].get('conversation'):
-        return False
-    if not postJsonObject['object'].get('id'):
-        return False
-    conversationDir = acctDir(baseDir, nickname, domain) + '/conversation'
-    if not os.path.isdir(conversationDir):
-        os.mkdir(conversationDir)
-    conversationId = postJsonObject['object']['conversation']
-    conversationId = conversationId.replace('/', '#')
-    postId = removeIdEnding(postJsonObject['object']['id'])
-    conversationFilename = conversationDir + '/' + conversationId
-    if not os.path.isfile(conversationFilename):
-        try:
-            with open(conversationFilename, 'w+') as fp:
-                fp.write(postId + '\n')
-                return True
-        except BaseException:
-            pass
-    elif postId + '\n' not in open(conversationFilename).read():
-        try:
-            with open(conversationFilename, 'a+') as fp:
-                fp.write(postId + '\n')
-                return True
-        except BaseException:
-            pass
-    return False
+def _get_conversation_filename(base_dir: str, nickname: str, domain: str,
+                               post_json_object: {}) -> str:
+    """Returns the conversation filename
+    """
+    if not has_object_dict(post_json_object):
+        return None
+    if not post_json_object['object'].get('conversation'):
+        return None
+    if not post_json_object['object'].get('id'):
+        return None
+    conversation_dir = acct_dir(base_dir, nickname, domain) + '/conversation'
+    if not os.path.isdir(conversation_dir):
+        os.mkdir(conversation_dir)
+    conversation_id = post_json_object['object']['conversation']
+    conversation_id = conversation_id.replace('/', '#')
+    return conversation_dir + '/' + conversation_id
+
+
+def update_conversation(base_dir: str, nickname: str, domain: str,
+                        post_json_object: {}) -> bool:
+    """Adds a post to a conversation index in the /conversation subdirectory
+    """
+    conversation_filename = \
+        _get_conversation_filename(base_dir, nickname, domain,
+                                   post_json_object)
+    if not conversation_filename:
+        return False
+    post_id = remove_id_ending(post_json_object['object']['id'])
+    if not os.path.isfile(conversation_filename):
+        try:
+            with open(conversation_filename, 'w+',
+                      encoding='utf-8') as conv_file:
+                conv_file.write(post_id + '\n')
+                return True
+        except OSError:
+            print('EX: update_conversation ' +
+                  'unable to write to ' + conversation_filename)
+    elif not text_in_file(post_id + '\n', conversation_filename):
+        try:
+            with open(conversation_filename, 'a+',
+                      encoding='utf-8') as conv_file:
+                conv_file.write(post_id + '\n')
+                return True
+        except OSError:
+            print('EX: update_conversation 2 ' +
+                  'unable to write to ' + conversation_filename)
+    return False
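The conversation index is simply a text file of post ids, one per line, named after the conversation id with '/' mapped to '#'. A quick illustration of the filename mapping, using an invented conversation id:

conversation_id = 'https://instanceA/contexts/abc123'
filename_part = conversation_id.replace('/', '#')
print(filename_part)
# https:##instanceA#contexts#abc123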

-def muteConversation(baseDir: str, nickname: str, domain: str,
-                     conversationId: str) -> None:
-    """Mutes the given conversation
-    """
-    conversationDir = acctDir(baseDir, nickname, domain) + '/conversation'
-    conversationFilename = \
-        conversationDir + '/' + conversationId.replace('/', '#')
-    if not os.path.isfile(conversationFilename):
-        return
-    if os.path.isfile(conversationFilename + '.muted'):
-        return
-    with open(conversationFilename + '.muted', 'w+') as fp:
-        fp.write('\n')
-
-
-def unmuteConversation(baseDir: str, nickname: str, domain: str,
-                       conversationId: str) -> None:
-    """Unmutes the given conversation
-    """
-    conversationDir = acctDir(baseDir, nickname, domain) + '/conversation'
-    conversationFilename = \
-        conversationDir + '/' + conversationId.replace('/', '#')
-    if not os.path.isfile(conversationFilename):
-        return
-    if not os.path.isfile(conversationFilename + '.muted'):
-        return
-    try:
-        os.remove(conversationFilename + '.muted')
-    except BaseException:
-        pass
+def mute_conversation(base_dir: str, nickname: str, domain: str,
+                      conversation_id: str) -> None:
+    """Mutes the given conversation
+    """
+    conversation_dir = acct_dir(base_dir, nickname, domain) + '/conversation'
+    conversation_filename = \
+        conversation_dir + '/' + conversation_id.replace('/', '#')
+    if not os.path.isfile(conversation_filename):
+        return
+    if os.path.isfile(conversation_filename + '.muted'):
+        return
+    try:
+        with open(conversation_filename + '.muted', 'w+',
+                  encoding='utf-8') as conv_file:
+            conv_file.write('\n')
+    except OSError:
+        print('EX: unable to write mute ' + conversation_filename)
+
+
+def unmute_conversation(base_dir: str, nickname: str, domain: str,
+                        conversation_id: str) -> None:
+    """Unmutes the given conversation
+    """
+    conversation_dir = acct_dir(base_dir, nickname, domain) + '/conversation'
+    conversation_filename = \
+        conversation_dir + '/' + conversation_id.replace('/', '#')
+    if not os.path.isfile(conversation_filename):
+        return
+    if not os.path.isfile(conversation_filename + '.muted'):
+        return
+    try:
+        os.remove(conversation_filename + '.muted')
+    except OSError:
+        print('EX: unmute_conversation unable to delete ' +
+              conversation_filename + '.muted')
crawlers.py

@@ -0,0 +1,181 @@
__filename__ = "crawlers.py"
__author__ = "Bob Mottram"
__license__ = "AGPL3+"
__version__ = "1.3.0"
__maintainer__ = "Bob Mottram"
__email__ = "bob@libreserver.org"
__status__ = "Production"
__module_group__ = "Core"

import os
import time
from utils import save_json
from utils import user_agent_domain
from utils import remove_eol
from blocking import update_blocked_cache
from blocking import is_blocked_domain

default_user_agent_blocks = [
    'fedilist'
]


def update_known_crawlers(ua_str: str,
                          base_dir: str, known_crawlers: {},
                          last_known_crawler: int):
    """Updates a dictionary of known crawlers accessing nodeinfo
    or the masto API
    """
    if not ua_str:
        return None

    curr_time = int(time.time())
    if known_crawlers.get(ua_str):
        known_crawlers[ua_str]['hits'] += 1
        known_crawlers[ua_str]['lastseen'] = curr_time
    else:
        known_crawlers[ua_str] = {
            "lastseen": curr_time,
            "hits": 1
        }

    if curr_time - last_known_crawler >= 30:
        # remove any old observations
        remove_crawlers = []
        for uagent, item in known_crawlers.items():
            if curr_time - item['lastseen'] >= 60 * 60 * 24 * 30:
                remove_crawlers.append(uagent)
        for uagent in remove_crawlers:
            del known_crawlers[uagent]
        # save the list of crawlers
        save_json(known_crawlers,
                  base_dir + '/accounts/knownCrawlers.json')
    return curr_time


def load_known_web_bots(base_dir: str) -> []:
    """Returns a list of known web bots
    """
    known_bots_filename = base_dir + '/accounts/knownBots.txt'
    if not os.path.isfile(known_bots_filename):
        return []
    crawlers_str = None
    try:
        with open(known_bots_filename, 'r', encoding='utf-8') as fp_crawlers:
            crawlers_str = fp_crawlers.read()
    except OSError:
        print('EX: unable to load web bots from ' +
              known_bots_filename)
    if not crawlers_str:
        return []
    known_bots = []
    crawlers_list = crawlers_str.split('\n')
    for crawler in crawlers_list:
        if not crawler:
            continue
        crawler = remove_eol(crawler).strip()
        if not crawler:
            continue
        if crawler not in known_bots:
            known_bots.append(crawler)
    return known_bots


def _save_known_web_bots(base_dir: str, known_bots: []) -> bool:
    """Saves a list of known web bots
    """
    known_bots_filename = base_dir + '/accounts/knownBots.txt'
    known_bots_str = ''
    for crawler in known_bots:
        known_bots_str += crawler.strip() + '\n'
    try:
        with open(known_bots_filename, 'w+', encoding='utf-8') as fp_crawlers:
            fp_crawlers.write(known_bots_str)
    except OSError:
        print("EX: unable to save known web bots to " +
              known_bots_filename)
        return False
    return True


def blocked_user_agent(calling_domain: str, agent_str: str,
                       news_instance: bool, debug: bool,
                       user_agents_blocked: [],
                       blocked_cache_last_updated,
                       base_dir: str,
                       blocked_cache: [],
                       blocked_cache_update_secs: int,
                       crawlers_allowed: [],
                       known_bots: []):
    """Should a GET or POST be blocked based upon its user agent?
    """
    if not agent_str:
        return False, blocked_cache_last_updated

    agent_str_lower = agent_str.lower()
    for ua_block in default_user_agent_blocks:
        if ua_block in agent_str_lower:
            print('Blocked User agent 1: ' + ua_block)
            return True, blocked_cache_last_updated

    agent_domain = None

    if agent_str:
        # is this a web crawler? If so then block it by default
        # unless this is a news instance or if it is in the allowed list
        bot_strings = ('bot/', 'bot-', '/bot', '/robot')
        contains_bot_string = False
        for bot_str in bot_strings:
            if bot_str in agent_str_lower:
                if '://bot' not in agent_str_lower and \
                   '://robot' not in agent_str_lower:
                    contains_bot_string = True
                    break
        if contains_bot_string:
            if agent_str_lower not in known_bots:
                known_bots.append(agent_str_lower)
                known_bots.sort()
                _save_known_web_bots(base_dir, known_bots)
            # if this is a news instance then we want it
            # to be indexed by search engines
            if news_instance:
                return False, blocked_cache_last_updated
            # is this crawler allowed?
            for crawler in crawlers_allowed:
                if crawler.lower() in agent_str_lower:
                    return False, blocked_cache_last_updated
            print('Blocked Crawler: ' + agent_str)
            return True, blocked_cache_last_updated
        # get domain name from User-Agent
        agent_domain = user_agent_domain(agent_str, debug)
    else:
        # no User-Agent header is present
        return True, blocked_cache_last_updated

    # is the User-Agent type blocked? eg. "Mastodon"
    if user_agents_blocked:
        blocked_ua = False
        for agent_name in user_agents_blocked:
            if agent_name in agent_str:
                blocked_ua = True
                break
        if blocked_ua:
            return True, blocked_cache_last_updated

    if not agent_domain:
        return False, blocked_cache_last_updated

    # is the User-Agent domain blocked
    blocked_ua = False
    if not agent_domain.startswith(calling_domain):
        blocked_cache_last_updated = \
            update_blocked_cache(base_dir, blocked_cache,
                                 blocked_cache_last_updated,
                                 blocked_cache_update_secs)

        blocked_ua = \
            is_blocked_domain(base_dir, agent_domain, blocked_cache)
        # if self.server.debug:
        if blocked_ua:
            print('Blocked User agent 2: ' + agent_domain)
    return blocked_ua, blocked_cache_last_updated
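A sketch of the pruning rule in update_known_crawlers above: entries not seen for 30 days are dropped on the next update. The user agent string and timestamps here are invented:

import time

known = {'OldBot/1.0': {'lastseen': int(time.time()) - 60 * 60 * 24 * 40,
                        'hits': 5}}
curr = int(time.time())
stale = [ua for ua, item in known.items()
         if curr - item['lastseen'] >= 60 * 60 * 24 * 30]
for ua in stale:
    del known[ua]
print(known)   # {} - the stale entry has been pruned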
@@ -0,0 +1,18 @@
{
  "name": "Ableism",
  "warning": "Ableism",
  "description": "Discrimination and social prejudice against people with disabilities.",
  "words": [
    "crazy", "cripple", "turn a blind", "turn a deaf", "diffability",
    "differently abled", "different abilit",
    " dumb.", " dumb ", " dumb!", " dumb?", "handicap",
    "idiot", "imbecil", "insanity", "insane", " lame", "lunatic",
    "maniac", "moron", "retard", "spaz", "spastic", "specially-abled",
    "special needs", "stupid", "blind to", "bonkers",
    "wheelchair bound", "confined to a wheelchair", "deaf to",
    "deranged", "derranged", "harelip", "wacko", "whacko",
    "cretin", "feeble-minded", "mental defective", "mentally defective",
    "mongoloid", "blinded by", "double-blind"
  ],
  "domains": []
}
@@ -0,0 +1,86 @@
{
  "name": "Murdoch press",
  "warning": "Murdoch Press",
  "description": "Articles will be slanted towards right wing talking points",
  "words": [],
  "domains": [
    "api.news", "content.api.news", "newscdn.com.au",
    "resourcesssl.newscdn.com.au", "thesun.co.uk", "thetimes.co.uk",
    "thesundaytimes.co.uk", "pressassociation.com", "/news.co.uk",
    ".news.co.uk", " news.co.uk", "newscorpaustralia.com",
    "theaustralian.com.au", "aap.com.au", "news.com.au", "skynews.com.au",
    "skyweather.com.au", "australiannewschannel.com.au",
    "weeklytimesnow.com.au", "dailytelegraph.com.au", "heraldsun.com.au",
    "geelongadvertiser.com.au", "couriermail.com.au", "thesundaymail.com.au",
    "goldcoastbulletin.com.au", "cairnspost.com.au",
    "townsvillebulletin.com.au", "adelaidenow.com.au", "themercury.com.au",
    "ntnews.com.au", "postcourier.com.pg", "nypost.com", "pagesix.com",
    "realtor.com", "wsj.com", "wsj.net", "foxnews.com", "fncstatic.com",
    "foxnewsgo.com", ".fox.com", "/fox.com", "foxbusiness.com",
    "foxsports.com", "fssta.com", "foxsports.com.au", "dowjones.com",
    "factiva.com", "barrons.com", "marketwatch.com", "heatst.com",
    "fnlondon.com", "mansionglobal.com", "spindices.com", "spglobal.com",
    "talksport.com", "harpercollins.com", "bestrecipes.com.au",
    "hipages.com.au", "homeimprovementpages.com.au", "odds.com.au",
    "onebigswitch.com.au", "suddenly.com.au", "supercoach.com.au",
    "punters.com.au", "kayosports.com.au", "foxtel.com.au", "newscorp.com",
    "storyful.com", "vogue.com.au", "taste.com.au", "kidspot.com.au",
    "bodyandsoul.com.au", "realcommercial.com.au", "reastatic.net",
    "realestate.com.au", "whereilive.com.au", "www.newsoftheworld.co.uk",
    "newsoftheworld.co.uk"
  ]
}
@@ -0,0 +1,22 @@
{
  "name": "Russian State Media",
  "warning": "Russian State Media",
  "description": "Funded by the Russian government, or promoting Russian government talking points",
  "words": [],
  "domains": [
    ".rt.com", "redfish.media", "ruptly.tv", "sputniknews.com",
    "theduran.com", "peacedata.net", "russia-insider.com", "snanews.de",
    "sputniknews.com", "inforos.ru", "usareally.com",
    "strategic-culture.org", "pravdareport.com", "checkpointasia.net"
  ]
}
@@ -0,0 +1,121 @@
{
  "name": "Satire",
  "warning": "Satire",
  "description": "Intended to be humorous. Not real news stories.",
  "words": [],
  "domains": [
    "alhudood.net", "adobochronicles.com", "alternativelyfacts.com",
    "alternative-science.com", "americaslastlineofdefense.com",
    "babylonbee.com", "bluenewsnetwork.com", "borowitzreport.com",
    "breakingburgh.com", "bullshitnews.org", "bustatroll.org",
    "burrardstreetjournal.com", "clickhole.com", "confederacyofdrones.com",
    "conservativetears.com", "cracked.com", "dailybonnet.com",
    "dailysquib.co.uk", "dailyworldupdate.us", "dailysnark.com",
    "der-postillon.com", "derfmagazine.com", "elchiguirebipolar.net",
    "elmundotoday.com", "speld.nl", "duffelblog.com", "duhprogressive.com",
    "elkoshary.com", "empirenews.net", "empiresports.co",
    "eveningharold.com", "fark.com", "fmobserver.com", "fognews.ru",
    "frankmag.ca", "framleyexaminer.com", "freedomcrossroads.com",
    "freedomfictions.com", "genesiustimes.com", "gishgallop.com",
    "gomerblog.com", "harddawn.com", "huzlers.com", "www.imao.us",
    "infobattle.org", "islamicanews.com", "chronicle.su",
    "landoverbaptist.org", "larknews.com", "legorafi.fr", "lercio.it",
    "madhousemagazine.com", "mcsweeneys.net", "moronmajority.com",
    "nationalreport.net", "newsbiscuit.com", "newsmutiny.com",
    "newsthump.com", "npcdaily.com", "prettycoolsite.com",
    "private-eye.co.uk", "realnewsrightnow.com", "realrawnews.com",
    "reductress.com", "sanctumnews.com", "satirev.org", "sportspickle.com",
    "stiltonsplace.blogspot.com", "stubhillnews.com", "stuppid.com",
    "suffolkgazette.com", "sundaysportonline.co.uk",
    "thatsprettygoodscience.com", "atlbanana.com", "thebeaverton.com",
    "betootaadvocate.com", "chaser.com.au", "dailydiscord.com",
    "thedailymash.co.uk", "halfwaypost.com", "thehardtimes.net",
    "humortimes.com", "satirewire.com", "scrappleface.com",
    "thelemonpress.co.uk", "themideastbeast.com", "theneedling.com",
    "theonion.com", "theoxymoron.co.uk", "thepeoplescube.com",
    "thepoke.co.uk", "therightists.com", "rochdaleherald.co.uk",
    "politicalgarbagechute.com", "the-postillon.com", "thecivilian.co.nz",
    "thedailyer.com", "thedailywtf.com", "theredshtick.com",
    "thesciencepost.com", "theshovel.com.au", "thespoof.com",
    "thestonkmarket.com", "thereisnews.com", "tittletattle365.com",
    "truenorthtimes.ca", "truthbrary.org", "walkingeaglenews.com",
    "waterfordwhispersnews.com", "weeklyworldnews.com", "wokennews.com",
    "worldnewsdailyreport.com", "zaytung.com"
  ]
}
@@ -0,0 +1,25 @@
{
  "name": "UK RW Think Tanks",
  "warning": "UK RW Think Tank",
  "description": "Biased towards UK right wing agendas",
  "words": [
    "Adam Smith Institute", "Bow Group", "Centre for Policy Studies",
    "Centre for Social Justice", "Chatham House",
    "Institute of Economic Affairs", "Legatum Institute", "Policy Exchange"
  ],
  "domains": [
    "adamsmith.org", "bowgroup.org", "cps.org.uk",
    "centreforsocialjustice.org.uk", "chathamhouse.org", "iea.org.uk",
    "https://li.com", "policyexchange.org.uk"
  ]
}
cwtch.py
@@ -1,92 +1,115 @@
 __filename__ = "cwtch.py"
 __author__ = "Bob Mottram"
 __license__ = "AGPL3+"
-__version__ = "1.2.0"
+__version__ = "1.3.0"
 __maintainer__ = "Bob Mottram"
 __email__ = "bob@libreserver.org"
 __status__ = "Production"
 __module_group__ = "Profile Metadata"

 import re
+from utils import get_attachment_property_value


-def getCwtchAddress(actorJson: {}) -> str:
+def get_cwtch_address(actor_json: {}) -> str:
     """Returns cwtch address for the given actor
     """
-    if not actorJson.get('attachment'):
+    if not actor_json.get('attachment'):
         return ''
-    for propertyValue in actorJson['attachment']:
-        if not propertyValue.get('name'):
+    for property_value in actor_json['attachment']:
+        name_value = None
+        if property_value.get('name'):
+            name_value = property_value['name']
+        elif property_value.get('schema:name'):
+            name_value = property_value['schema:name']
+        if not name_value:
             continue
-        if not propertyValue['name'].lower().startswith('cwtch'):
+        if not name_value.lower().startswith('cwtch'):
             continue
-        if not propertyValue.get('type'):
+        if not property_value.get('type'):
             continue
-        if not propertyValue.get('value'):
+        prop_value_name, prop_value = \
+            get_attachment_property_value(property_value)
+        if not prop_value:
             continue
-        if propertyValue['type'] != 'PropertyValue':
+        if not property_value['type'].endswith('PropertyValue'):
             continue
-        propertyValue['value'] = propertyValue['value'].strip()
-        if len(propertyValue['value']) < 2:
+        property_value[prop_value_name] = \
+            property_value[prop_value_name].strip()
+        if len(property_value[prop_value_name]) < 2:
             continue
-        if '"' in propertyValue['value']:
+        if '"' in property_value[prop_value_name]:
             continue
-        if ' ' in propertyValue['value']:
+        if ' ' in property_value[prop_value_name]:
             continue
-        if ',' in propertyValue['value']:
+        if ',' in property_value[prop_value_name]:
             continue
-        if '.' in propertyValue['value']:
+        if '.' in property_value[prop_value_name]:
             continue
-        return propertyValue['value']
+        return property_value[prop_value_name]
     return ''
|
||||
|
||||
def setCwtchAddress(actorJson: {}, cwtchAddress: str) -> None:
|
||||
def set_cwtch_address(actor_json: {}, cwtch_address: str) -> None:
|
||||
"""Sets an cwtch address for the given actor
|
||||
"""
|
||||
notCwtchAddress = False
|
||||
not_cwtch_address = False
|
||||
|
||||
if len(cwtchAddress) < 56:
|
||||
notCwtchAddress = True
|
||||
if cwtchAddress != cwtchAddress.lower():
|
||||
notCwtchAddress = True
|
||||
if not re.match("^[a-z0-9]*$", cwtchAddress):
|
||||
notCwtchAddress = True
|
||||
if len(cwtch_address) < 56:
|
||||
not_cwtch_address = True
|
||||
if cwtch_address != cwtch_address.lower():
|
||||
not_cwtch_address = True
|
||||
if not re.match("^[a-z0-9]*$", cwtch_address):
|
||||
not_cwtch_address = True
|
||||
|
||||
if not actorJson.get('attachment'):
|
||||
actorJson['attachment'] = []
|
||||
if not actor_json.get('attachment'):
|
||||
actor_json['attachment'] = []
|
||||
|
||||
# remove any existing value
|
||||
propertyFound = None
|
||||
for propertyValue in actorJson['attachment']:
|
||||
if not propertyValue.get('name'):
|
||||
property_found = None
|
||||
for property_value in actor_json['attachment']:
|
||||
name_value = None
|
||||
if property_value.get('name'):
|
||||
name_value = property_value['name']
|
||||
elif property_value.get('schema:name'):
|
||||
name_value = property_value['schema:name']
|
||||
if not name_value:
|
||||
continue
|
||||
if not propertyValue.get('type'):
|
||||
if not property_value.get('type'):
|
||||
continue
|
||||
if not propertyValue['name'].lower().startswith('cwtch'):
|
||||
if not name_value.lower().startswith('cwtch'):
|
||||
continue
|
||||
propertyFound = propertyValue
|
||||
property_found = property_value
|
||||
break
|
||||
if propertyFound:
|
||||
actorJson['attachment'].remove(propertyFound)
|
||||
if notCwtchAddress:
|
||||
if property_found:
|
||||
actor_json['attachment'].remove(property_found)
|
||||
if not_cwtch_address:
|
||||
return
|
||||
|
||||
for propertyValue in actorJson['attachment']:
|
||||
if not propertyValue.get('name'):
|
||||
for property_value in actor_json['attachment']:
|
||||
name_value = None
|
||||
if property_value.get('name'):
|
||||
name_value = property_value['name']
|
||||
elif property_value.get('schema:name'):
|
||||
name_value = property_value['schema:name']
|
||||
if not name_value:
|
||||
continue
|
||||
if not propertyValue.get('type'):
|
||||
if not property_value.get('type'):
|
||||
continue
|
||||
if not propertyValue['name'].lower().startswith('cwtch'):
|
||||
if not name_value.lower().startswith('cwtch'):
|
||||
continue
|
||||
if propertyValue['type'] != 'PropertyValue':
|
||||
if not property_value['type'].endswith('PropertyValue'):
|
||||
continue
|
||||
propertyValue['value'] = cwtchAddress
|
||||
prop_value_name, _ = \
|
||||
get_attachment_property_value(property_value)
|
||||
if not prop_value_name:
|
||||
continue
|
||||
property_value[prop_value_name] = cwtch_address
|
||||
return
|
||||
|
||||
newCwtchAddress = {
|
||||
new_cwtch_address = {
|
||||
"name": "Cwtch",
|
||||
"type": "PropertyValue",
|
||||
"value": cwtchAddress
|
||||
"value": cwtch_address
|
||||
}
|
||||
actorJson['attachment'].append(newCwtchAddress)
|
||||
actor_json['attachment'].append(new_cwtch_address)
|
||||
|
|
|
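To make the refactor's contract concrete, here is a minimal usage sketch (not part of the commit). It assumes that `get_attachment_property_value` in utils.py returns a `(property_name, value)` pair for either a plain `value` key or a `schema:value` key, which is what the call sites above imply.

```python
# Illustrative only: exercises the refactored helpers from cwtch.py
from cwtch import get_cwtch_address, set_cwtch_address

actor_json = {'attachment': []}

# a plausible address: at least 56 lowercase alphanumeric characters
address = 'h3xnlqyq' * 7  # 56 chars, made up for the example
set_cwtch_address(actor_json, address)
assert get_cwtch_address(actor_json) == address

# invalid values (too short, mixed case, punctuation) cause any
# existing Cwtch entry to be removed and nothing new to be stored
set_cwtch_address(actor_json, 'Not.A.Valid.Address')
assert get_cwtch_address(actor_json) == ''
```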
@@ -2,8 +2,5 @@
 ### Origin Story
 How your instance began.

 ### Lore
 Customs and rituals.

 ### Epic Tales
 Heroic deeds and dastardly foes.
-### Research Uses
-The administrator of this instance does not agree to participate in human subjects research, or research to study website policies and practices, carried out by academic institutions or their executors without prior written consent.
-
@@ -0,0 +1,15 @@
X-pilled, alt-right terminology
soy boy, alt-right terminology
soyboy, alt-right terminology
soyboi, alt-right terminology
kek, alt-right terminology
groyper, alt-right meme
chad, alt-right meme
globalist*, antisemitism
globalism, antisemitism
fren, alt-right terminology
cuck*, alt-right terminology
*1488, nazism
nazbol, nazism
*1290, antisemitism
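In this list the left column is a pattern, where a leading or trailing `*` acts as a wildcard, and the right column is the warning to attach. A minimal matching sketch (illustrative only; the function name and whitespace tokenisation are assumptions, not the project's actual implementation):

```python
import fnmatch


def warning_for_text(text: str, rules: list) -> str:
    """Return the first matching warning for any word in the text.
    rules is a list of (pattern, warning) tuples, where a pattern may
    contain * wildcards as in the file above."""
    for token in text.lower().split():
        for pattern, warning in rules:
            if fnmatch.fnmatch(token, pattern.lower()):
                return warning
    return ''


rules = [('globalist*', 'antisemitism'), ('*1488', 'nazism')]
print(warning_for_text('the globalists are at it again', rules))
# -> antisemitism
```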
@@ -11,7 +11,7 @@ Posts can be removed on request if there is sufficient justification, but the na
 ### Content Policy
 This instance will not host content containing sexism, racism, casteism, homophobia, transphobia, misogyny, antisemitism or other forms of bigotry or discrimination on the basis of nationality or immigration status. Claims that transgressions of this type were intended to be "ironic" will be treated as a terms of service violation.

-Even if not conspicuously discriminatory, expressions of support for organizations with discriminatory agendas are not permitted on this instance. These include, but are not limited to, racial supremacist groups, the redpill/incel movement and anti-LGBT or anti-immigrant campaigns.
+Even if not conspicuously discriminatory, expressions of support for organizations with discriminatory agendas are not permitted on this instance. These include, but are not limited to, racial supremacist groups, the redpill/incel movement, anti-vaccination, anti-LGBT and anti-immigrant campaigns.

 Depictions of injury, death or medical procedures are not permitted.

@@ -24,13 +24,15 @@ Moderators rely upon your reports. Don't assume that something of concern has al
 Content found to be non-compliant with this policy will be removed, and any accounts on this instance producing, repeating or linking to such content will be deleted, typically without prior notification.

 ### Federation Policy
-In a proactive effort to avoid the classic fate of *"embrace, extend, extinguish"* this system will block any instance launched, acquired or funded by Alphabet, Facebook, Twitter, Microsoft, Apple, Amazon, Elsevier or other monopolistic Silicon Valley companies.
+In a proactive effort to avoid the classic fate of *"embrace, extend, extinguish"* this system will block any instance launched, acquired or funded by Alphabet/Google, Facebook/Meta, Twitter, Microsoft, Apple, Amazon, Elsevier or other monopolistic Silicon Valley companies.

 This system will not federate with instances whose moderation policy is incompatible with the content policy described above. If an instance lacks a moderation policy, or refuses to enforce one, it will be assumed to be incompatible.

-### Use of User Generated Content for Research
+### Use by Research Organizations and Universities
 Data may not be "scraped" or otherwise obtained from this instance and used for academic research or cited within research publications without the prior written permission of the administrator. Financial remedy will be sought through the courts from any researcher publishing data obtained from this instance without consent.

+The administrator of this instance does not agree to participate in human subjects research, or research to study website policies and practices, carried out by academic institutions or their executors without prior written consent.
+
 ### Commercial Use
 Commercial use of original content on this instance is strictly forbidden without the prior written permission of individual account holders. The instance administrator does not hold copyright on any original content posted by account holders. Publication or federation of content does not imply permission for commercial use.

@@ -0,0 +1,18 @@
### অভিনন্দন!
আপনি এখন Epicyon ব্যবহার শুরু করতে প্রস্তুত। এটি একটি সংযত সামাজিক স্থান, তাই দয়া করে আমাদের [পরিষেবার শর্তাবলী](/terms) মেনে চলা নিশ্চিত করুন এবং মজা করুন।

#### ইঙ্গিত
**ম্যাগনিফায়ার** আইকনটি ব্যবহার করুন

স্ক্রিনের **শীর্ষে ব্যানার** নির্বাচন করলে টাইমলাইন ভিউ এবং আপনার প্রোফাইলের মধ্যে স্যুইচ হয়।

পোস্টগুলি এলে স্ক্রীন স্বয়ংক্রিয়ভাবে রিফ্রেশ হবে না, তাই রিফ্রেশ করতে **F5** বা **ইনবক্স** বোতামটি ব্যবহার করুন৷

#### উত্তরণের আচার
কর্পোরেট সংস্কৃতি আপনাকে সর্বাধিক সংখ্যক অনুসারী এবং পছন্দ করতে প্রশিক্ষণ দেয় - মনোযোগ আকর্ষণের জন্য ব্যক্তিগত খ্যাতি এবং অগভীর, আক্রোশ-প্ররোচিত মিথস্ক্রিয়া খোঁজার জন্য।

তাই আপনি যদি সেই সংস্কৃতি থেকে আসছেন, অনুগ্রহ করে সচেতন থাকুন যে এটি একটি ভিন্ন ধরনের সিস্টেম যার প্রত্যাশার একটি ভিন্ন সেট।

প্রচুর ফলোয়ার থাকা জরুরী নয় এবং প্রায়শই এটি অবাঞ্ছিত। লোকেরা আপনাকে ব্লক করতে পারে, এবং এটি ঠিক আছে। শ্রোতা হওয়ার অধিকার কারো নেই। যদি কেউ আপনাকে ব্লক করে তাহলে আপনাকে সেন্সর করা হচ্ছে না। মানুষ শুধু তাদের স্বাধীনতার ব্যায়াম করছে যার সাথে ইচ্ছা মেলাতে।

ব্যক্তিগত আচরণের মান কর্পোরেট সিস্টেমের তুলনায় ভাল হবে বলে আশা করা হচ্ছে। আপনার আচরণের এই উদাহরণের খ্যাতির জন্যও পরিণতি রয়েছে। আপনি যদি পরিষেবার শর্তাবলীর বিরুদ্ধে যায় এমন একটি অবিবেচনাপূর্ণ আচরণ করেন তবে আপনার অ্যাকাউন্ট স্থগিত বা সরানো হতে পারে।

@@ -0,0 +1,18 @@
### Congratulations!
You are now ready to begin using Epicyon. This is a moderated social space, so please make sure to abide by our [terms of service](/terms), and have fun.

#### Hints
Use the **magnifier** icon 🔍 to search for fediverse handles and follow people.

Selecting the **banner at the top** of the screen switches between timeline view and your profile.

The screen will not automatically refresh when posts arrive, so use **F5** or the **Inbox** button to refresh.

#### Rite of Passage
Corporate culture trains you to want the maximum number of followers and likes - to seek personal fame and shallow, outrage-inducing interactions to grab attention.

So if you are coming from that culture, please be aware that this is a different type of system with a very different set of expectations.

Having a lot of followers is not necessary, and often it's undesirable. People may block you, and that's ok. Nobody has a right to an audience. If someone blocks you then you're not being censored. People are just exercising their freedom to associate with whoever they wish.

Standards of personal behavior are expected to be better than in the corporate systems. Your behavior also has consequences for the reputation of this instance. If you behave in an inconsiderate manner which goes against the terms of service then your account may be suspended or removed.

@@ -0,0 +1,18 @@
### 축하합니다!
이제 Epicyon을 사용할 준비가 되었습니다. 이것은 조정된 소셜 공간이므로 [서비스 약관](/terms)을 준수하고 즐기십시오.

#### 힌트
**돋보기** 아이콘 🔍을 사용하여 fediverse 핸들을 검색하고 사람을 팔로우하세요.

화면 상단의 **배너**를 선택하면 타임라인 보기와 프로필 간에 전환됩니다.

게시물이 도착하면 화면이 자동으로 새로고침되지 않으므로 **F5** 또는 **받은편지함** 버튼을 눌러 새로고침하세요.

#### 통과 의례
기업 문화는 관심을 끌기 위해 개인적인 명성과 천박하고 분노를 유발하는 상호 작용을 추구하기 위해 최대 수의 팔로워와 좋아요를 원하도록 훈련합니다.

따라서 당신이 그 문화에서 왔다면 이것은 매우 다른 기대치를 가진 다른 유형의 시스템이라는 것을 알아두시기 바랍니다.

많은 팔로워를 갖는 것은 필요하지 않으며 종종 바람직하지 않습니다. 사람들이 당신을 차단할 수 있지만 괜찮습니다. 누구에게도 관객의 권리는 없습니다. 누군가가 당신을 차단하면 검열되지 않습니다. 사람들은 원하는 사람과 교제할 수 있는 자유를 행사할 뿐입니다.

개인 행동의 기준은 기업 시스템보다 더 나을 것으로 기대됩니다. 귀하의 행동은 이 인스턴스의 평판에도 영향을 미칩니다. 귀하가 서비스 약관에 위배되는 무례한 행동을 하는 경우 귀하의 계정이 정지되거나 제거될 수 있습니다.

@@ -0,0 +1,18 @@
### Gefeliciteerd!
U bent nu klaar om Epicyon te gaan gebruiken. Dit is een gemodereerde sociale ruimte, dus zorg ervoor dat u zich houdt aan onze [servicevoorwaarden](/terms), en veel plezier.

#### Tips
Gebruik het **vergrootglas**-pictogram 🔍 om naar fediverse handvatten te zoeken en mensen te volgen.

Als u de **banner bovenaan** op het scherm selecteert, schakelt u tussen de tijdlijnweergave en uw profiel.

Het scherm wordt niet automatisch vernieuwd wanneer berichten binnenkomen, dus gebruik **F5** of de **Inbox**-knop om te vernieuwen.

#### Toegangsritueel
Bedrijfscultuur traint je om het maximale aantal volgers en likes te willen - om persoonlijke roem en oppervlakkige, verontwaardiging opwekkende interacties te zoeken om de aandacht te trekken.

Dus als je uit die cultuur komt, houd er dan rekening mee dat dit een ander type systeem is met heel andere verwachtingen.

Veel volgers hebben is niet nodig, en vaak ook onwenselijk. Mensen kunnen je blokkeren, en dat is oké. Niemand heeft recht op een publiek. Als iemand je blokkeert, word je niet gecensureerd. Mensen maken gewoon gebruik van hun vrijheid om zich te associëren met wie ze maar willen.

Normen voor persoonlijk gedrag zullen naar verwachting beter zijn dan in de bedrijfssystemen. Uw gedrag heeft ook gevolgen voor de reputatie van deze instantie. Als u zich op een onachtzame manier gedraagt die in strijd is met de servicevoorwaarden, kan uw account worden opgeschort of verwijderd.

@@ -0,0 +1,18 @@
### Gratulacje!
Jesteś teraz gotowy do rozpoczęcia korzystania z Epicyona. To moderowana przestrzeń społecznościowa, więc przestrzegaj naszych [warunków korzystania z usługi](/terms) i baw się dobrze.

#### Poradnik
Użyj ikony **lupy** 🔍, by wyszukiwać różne uchwyty i obserwować osoby.

Wybranie **banera u góry** ekranu przełącza między widokiem osi czasu a Twoim profilem.

Ekran nie odświeża się automatycznie po otrzymaniu postów, więc użyj **F5** lub przycisku **Odebrane**, aby odświeżyć.

#### Rytuał przejścia
Kultura korporacyjna uczy, jak chcieć maksymalnej liczby obserwujących i polubień - szukać osobistej sławy i płytkich, wywołujących oburzenie interakcji, aby przyciągnąć uwagę.

Więc jeśli wywodzisz się z tej kultury, pamiętaj, że jest to inny rodzaj systemu z bardzo różnymi oczekiwaniami.

Posiadanie wielu obserwujących nie jest konieczne, a często jest niepożądane. Ludzie mogą cię blokować i to jest w porządku. Nikt nie ma prawa do publiczności. Jeśli ktoś cię blokuje, nie jesteś cenzurowany. Ludzie po prostu korzystają ze swojej wolności kojarzenia się z kimkolwiek chcą.

Oczekuje się, że standardy zachowania osobistego będą lepsze niż w systemach korporacyjnych. Twoje zachowanie ma również konsekwencje dla reputacji tej instancji. Jeśli zachowujesz się w sposób nierozważny, który jest sprzeczny z warunkami korzystania z usługi, Twoje konto może zostać zawieszone lub usunięte.

@@ -0,0 +1,18 @@
### Tebrikler!
Artık Epicyon'u kullanmaya başlamaya hazırsınız. Bu, denetlenen bir sosyal alandır, bu nedenle lütfen [hizmet şartlarımıza](/terms) uyduğunuzdan ve eğlendiğinizden emin olun.

#### İpuçları
**büyüteç** simgesini 🔍 kullanarak federasyon tutamaçlarını arayın ve insanları takip edin.

Ekranın üst kısmındaki **başlığı** seçmek, zaman çizelgesi görünümü ile profiliniz arasında geçiş yapar.

Gönderiler geldiğinde ekran otomatik olarak yenilenmeyecektir, bu nedenle yenilemek için **F5** veya **Gelen Kutusu** düğmesini kullanın.

#### Geçiş ayini
Kurumsal kültür, sizi maksimum sayıda takipçi ve beğeni istemeye - kişisel şöhret ve dikkat çekmek için sığ, öfke uyandıran etkileşimler aramaya - eğitir.

Dolayısıyla, o kültürden geliyorsanız, lütfen bunun çok farklı beklentilere sahip farklı bir sistem türü olduğunun farkında olun.

Çok sayıda takipçiye sahip olmak gerekli değildir ve genellikle istenmeyen bir durumdur. İnsanlar sizi engelleyebilir ve sorun değil. Kimsenin seyirci hakkı yoktur. Biri sizi engellerse sansürlenmiyorsunuz demektir. İnsanlar sadece istedikleri kişiyle ilişki kurmak için özgürlüklerini kullanıyorlar.

Kişisel davranış standartlarının kurumsal sistemlerden daha iyi olması beklenir. Davranışınızın da bu örneğin itibarı için sonuçları vardır. Hizmet şartlarına aykırı düşüncesizce davranırsanız, hesabınız askıya alınabilir veya kaldırılabilir.

@@ -0,0 +1,18 @@
### Вітаю!
Тепер ви готові почати використовувати Epicyon. Це модерований соціальний простір, тому дотримуйтеся наших [умов надання послуг](/terms) і отримуйте задоволення.

#### Підказки
Використовуйте значок **лупи** 🔍, щоб шукати різноманітні ручки та слідкувати за людьми.

Вибір **банера вгорі** екрана перемикається між переглядом часової шкали та вашим профілем.

Екран не оновлюватиметься автоматично, коли надходять повідомлення, тому використовуйте **F5** або кнопку **Вхідні**, щоб оновити.

#### Обряд посвячення
Корпоративна культура навчає вас хотіти мати максимальну кількість підписників і лайків – шукати особистої слави та неглибоких, що викликають обурення взаємодії, щоб привернути увагу.

Тому, якщо ви походите з цієї культури, будь ласка, майте на увазі, що це інший тип системи з зовсім іншим набором очікувань.

Мати багато підписників не обов’язково, а часто і небажано. Люди можуть заблокувати вас, і це нормально. Ніхто не має права на аудиторію. Якщо хтось блокує вас, це означає, що вас не піддають цензурі. Люди просто користуються своєю свободою спілкуватися з ким захочуть.

Очікується, що стандарти особистої поведінки будуть кращими, ніж у корпоративних системах. Ваша поведінка також має наслідки для репутації цього екземпляра. Якщо ви поводитеся необережно, що суперечить умовам обслуговування, ваш обліковий запис може бути призупинено або видалено.
@@ -0,0 +1,18 @@
### מאַזל - טאָוו!
איר זענט איצט גרייט צו אָנהייבן ניצן Epicyon. דאָס איז אַ מאַדערייטיד געזעלשאַפטלעך פּלאַץ, אַזוי ביטע מאַכן זיכער צו האַלטן אונדזער [טערמין פון דינסט](/terms), און האָבן שפּאַס.

#### הינץ
ניצן די **מאַגניפיער** ייקאַן 🔍 צו זוכן פֿאַר פעדייווערס כאַנדאַלז און נאָכגיין מענטשן.

סעלעקטינג די **פאָן אין דער שפּיץ** פון דעם עקראַן סוויטשיז צווישן די טיימליין מיינונג און דיין פּראָפיל.

דער עקראַן וועט נישט אויטאָמאַטיש דערפרישן ווען אַרטיקלען אָנקומען, אַזוי נוצן **F5** אָדער די **Inbox** קנעפּל צו דערפרישן.

#### רייט פון דורכפאָר
פֿירמע קולטור טריינז איר צו וועלן די מאַקסימום נומער פון אנהענגערס און לייקס - צו זוכן פּערזענלעך רום און פּליטקע, סקאַנדאַל-ינדוסינג ינטעראַקשאַנז צו כאַפּן ופמערקזאַמקייט.

אַזוי אויב איר קומען פֿון דער קולטור, ביטע זיין אַווער אַז דאָס איז אַ אַנדערש טיפּ פון סיסטעם מיט אַ זייער אַנדערש גאַנג פון עקספּעקטיישאַנז.

עס איז ניט נייטיק צו האָבן אַ פּלאַץ פון אנהענגערס, און אָפט עס איז אַנדיזייראַבאַל. מענטשן קענען פאַרשפּאַרן איר, און דאָס איז גוט. קיינער האָט נישט קיין רעכט צו אַ וילעם. אויב עמעצער בלאַקס איר, איר זענט נישט סענסערד. מענטשן זענען נאָר עקסערסייזינג זייער פרייהייט צו פאַרבינדן מיט ווער זיי וועלן.

סטאַנדאַרדס פון פּערזענלעך נאַטור זענען דערוואַרט צו זיין בעסער ווי אין די פֿירמע סיסטעמען. דיין נאַטור אויך האט קאַנסאַקווענסאַז פֿאַר די שעם פון דעם בייַשפּיל. אויב איר ביכייווז אין אַן ינקאַנסעראַט שטייגער וואָס גייט קעגן די טערמינען פון דינסט, דיין חשבון קען זיין סוספּענדעד אָדער אַוועקגענומען.
@@ -0,0 +1,3 @@
একটি কালানুক্রমিক টাইমলাইন হিসাবে সরাসরি বার্তাগুলি এখানে উপস্থিত হবে৷

স্প্যাম এড়াতে এবং নিরাপত্তা উন্নত করতে, ডিফল্টরূপে আপনি শুধুমাত্র সরাসরি বার্তা পেতে সক্ষম হবেন *আপনি যাদের অনুসরণ করছেন তাদের থেকে*। উপরের **ব্যানার** এবং তারপরে **সম্পাদনা** আইকনটি নির্বাচন করে আপনি প্রয়োজনে আপনার প্রোফাইল সেটিংসের মধ্যে এটি বন্ধ করতে পারেন৷

@@ -0,0 +1,3 @@
Τα άμεσα μηνύματα θα εμφανίζονται εδώ, ως χρονολογική γραμμή χρόνου.

Για να αποφύγετε τα ανεπιθύμητα μηνύματα και να βελτιώσετε την ασφάλεια, από προεπιλογή θα μπορείτε να λαμβάνετε απευθείας μηνύματα *από άτομα που παρακολουθείτε*. Μπορείτε να το απενεργοποιήσετε στις ρυθμίσεις του προφίλ σας εάν χρειάζεται, επιλέγοντας το επάνω **banner** και μετά το εικονίδιο **επεξεργασία**.

@@ -0,0 +1,3 @@
다이렉트 메시지는 시간순으로 여기에 표시됩니다.

스팸을 방지하고 보안을 강화하기 위해 기본적으로 *팔로잉하는 사람들*에게서만 다이렉트 메시지를 받을 수 있습니다. 필요한 경우 상단 **배너** 를 선택한 다음 **편집** 아이콘을 선택하여 프로필 설정에서 이 기능을 끌 수 있습니다.

@@ -0,0 +1,3 @@
Directe berichten verschijnen hier, als een chronologische tijdlijn.

Om spam te voorkomen en de veiligheid te verbeteren, kunt u standaard alleen directe berichten ontvangen *van mensen die u volgt*. Je kunt dit desgewenst in je profielinstellingen uitschakelen door de bovenste **banner** en vervolgens het **bewerken**-pictogram te selecteren.

@@ -0,0 +1,3 @@
Tutaj pojawią się wiadomości na czacie jako chronologiczna oś czasu.

Aby uniknąć spamu i poprawić bezpieczeństwo, domyślnie będziesz mógł otrzymywać bezpośrednie wiadomości *od osób, które obserwujesz*. W razie potrzeby możesz to wyłączyć w ustawieniach profilu, wybierając górny **baner**, a następnie ikonę **edytuj**.

@@ -0,0 +1,3 @@
Doğrudan mesajlar burada kronolojik bir zaman çizelgesi olarak görünecektir.

İstenmeyen e-postalardan kaçınmak ve güvenliği artırmak için, varsayılan olarak yalnızca *takip ettiğiniz kişilerden* doğrudan mesajlar alabileceksiniz. Gerekirse, üstteki **başlığı** ve ardından **düzenle** simgesini seçerek bunu profil ayarlarınızdan kapatabilirsiniz.

@@ -0,0 +1,3 @@
Прямі повідомлення відображатимуться тут у вигляді хронологічної шкали.

Щоб уникнути спаму та покращити безпеку, за замовчуванням ви зможете отримувати прямі повідомлення лише *від людей, за якими ви підписалися*. Ви можете вимкнути це в налаштуваннях профілю, якщо потрібно, вибравши верхній **банер**, а потім піктограму **редагувати**.

@@ -0,0 +1,3 @@
דירעקט אַרטיקלען וועט דערשייַנען דאָ, ווי אַ קראַנאַלאַדזשיקאַל טיימליין.

צו ויסמיידן ספּאַם און פֿאַרבעסערן זיכערהייט, דורך פעליקייַט איר וועט בלויז קענען צו באַקומען דירעקט אַרטיקלען *פון מענטשן וואָס איר נאָכפאָלגן*. איר קענען קער דאָס אַוועק אין דיין פּראָפיל סעטטינגס אויב איר דאַרפֿן, דורך סעלעקטירן דעם שפּיץ **פאָן** און דערנאָך די **רעדאַגירן** בילדל.
@@ -0,0 +1,19 @@
ইনকামিং পোস্ট এখানে প্রদর্শিত হবে, একটি কালানুক্রমিক টাইমলাইন হিসাবে. আপনি যদি কোনো পোস্ট পাঠান তাহলে সেগুলোও এখানে উপস্থিত হবে।

### শীর্ষ ব্যানার
স্ক্রিনের শীর্ষে আপনি আপনার প্রোফাইলে স্যুইচ করতে **ব্যানার** নির্বাচন করতে পারেন এবং এটি সম্পাদনা করতে বা লগ আউট করতে পারেন৷

### টাইমলাইন বোতাম এবং আইকন
উপরের ব্যানারের নিচে **বোতাম** আপনাকে বিভিন্ন সময়রেখা নির্বাচন করতে দেয়। এছাড়াও **অনুসন্ধান** করার, আপনার **ক্যালেন্ডার** দেখার বা **নতুন পোস্ট** তৈরি করার জন্য ডানদিকে **আইকন** রয়েছে৷

**শো/লুকান** আইকনটি মডারেটর নিয়ন্ত্রণ সহ আরও টাইমলাইন বোতাম দেখানোর অনুমতি দেয়।

### বাম কলাম
এখানে আপনি **উপযোগী লিঙ্ক** যোগ করতে পারেন। এটি শুধুমাত্র ডেস্কটপ ডিসপ্লে বা বড় স্ক্রীন সহ ডিভাইসগুলিতে প্রদর্শিত হয়৷ এটি একটি *blogroll* এর অনুরূপ। আপনার যদি **প্রশাসক** বা **সম্পাদক** ভূমিকা থাকে তবেই আপনি লিঙ্কগুলি যোগ করতে বা সম্পাদনা করতে পারেন৷

আপনি যদি মোবাইলে থাকেন তাহলে খবর পড়তে উপরে **লিঙ্ক আইকন** ব্যবহার করুন।

### ডান কলাম
RSS ফিডগুলি ডান কলামে যোগ করা যেতে পারে, যা *নিউজওয়্যার* নামে পরিচিত। এটি শুধুমাত্র ডেস্কটপ ডিসপ্লে বা বড় স্ক্রীন সহ ডিভাইসগুলিতে প্রদর্শিত হয়৷ আপনি শুধুমাত্র ফিড যোগ বা সম্পাদনা করতে পারেন যদি আপনার একটি **প্রশাসক** বা **সম্পাদক** ভূমিকা থাকে এবং ইনকামিং ফিড আইটেমগুলিও নিয়ন্ত্রণ করা যেতে পারে।

আপনি যদি মোবাইলে থাকেন তাহলে খবর পড়তে উপরে **নিউজওয়্যার আইকন** ব্যবহার করুন।

@@ -0,0 +1,19 @@
Οι εισερχόμενες αναρτήσεις θα εμφανίζονται εδώ, ως χρονολογικό χρονοδιάγραμμα. Εάν στείλετε οποιεσδήποτε δημοσιεύσεις θα εμφανιστούν επίσης εδώ.

### Το κορυφαίο πανό
Στο επάνω μέρος της οθόνης μπορείτε να επιλέξετε το **banner** για να μεταβείτε στο προφίλ σας και να το επεξεργαστείτε ή να αποσυνδεθείτε.

### Κουμπιά και εικονίδια γραμμής χρόνου
Τα **κουμπιά** κάτω από το επάνω banner σάς επιτρέπουν να επιλέξετε διαφορετικά χρονοδιαγράμματα. Υπάρχουν επίσης **εικονίδια** στα δεξιά για **αναζήτηση**, προβολή **ημερολογίου** ή δημιουργία **νέων αναρτήσεων**.

Το εικονίδιο **εμφάνιση/απόκρυψη** επιτρέπει την εμφάνιση περισσότερων κουμπιών γραμμής χρόνου, μαζί με τα στοιχεία ελέγχου επόπτη.

### Αριστερή στήλη
Εδώ μπορείτε να προσθέσετε **χρήσιμους συνδέσμους**. Εμφανίζεται μόνο σε επιτραπέζιες οθόνες ή συσκευές με μεγαλύτερες οθόνες. Είναι παρόμοιο με ένα *blogroll*. Μπορείτε να προσθέσετε ή να επεξεργαστείτε συνδέσμους μόνο εάν έχετε ρόλο **διαχειριστή** ή **συντάκτη**.

Εάν χρησιμοποιείτε κινητό, χρησιμοποιήστε το **εικονίδιο συνδέσμων** στην κορυφή για να διαβάσετε ειδήσεις.

### Δεξιά στήλη
Οι ροές RSS μπορούν να προστεθούν στη δεξιά στήλη, γνωστή ως *newswire*. Εμφανίζεται μόνο σε επιτραπέζιες οθόνες ή συσκευές με μεγαλύτερες οθόνες. Μπορείτε να προσθέσετε ή να επεξεργαστείτε ροές μόνο εάν διαθέτετε ρόλο **διαχειριστή** ή **επεξεργαστή** και τα εισερχόμενα στοιχεία ροής μπορούν επίσης να εποπτεύονται.

Εάν χρησιμοποιείτε κινητό, χρησιμοποιήστε το **εικονίδιο newswire** στην κορυφή για να διαβάσετε ειδήσεις.
@@ -0,0 +1,19 @@
들어오는 게시물은 시간순으로 여기에 표시됩니다. 게시물을 보내면 여기에도 표시됩니다.

### 상단 배너
화면 상단에서 **배너** 를 선택하여 프로필로 전환하고 프로필을 수정하거나 로그아웃할 수 있습니다.

### 타임라인 버튼 및 아이콘
상단 배너 아래의 **버튼** 을 사용하여 다른 타임라인을 선택할 수 있습니다. **검색**, **캘린더** 보기 또는 **새 게시물** 만들기를 수행할 수 있는 **아이콘**도 오른쪽에 있습니다.

**표시/숨기기** 아이콘을 사용하면 중재자 컨트롤과 함께 더 많은 타임라인 버튼을 표시할 수 있습니다.

### 왼쪽 열
여기에 **유용한 링크** 를 추가할 수 있습니다. 이것은 데스크탑 디스플레이 또는 더 큰 화면이 있는 장치에만 나타납니다. *blogroll* 과 비슷합니다. **관리자** 또는 **편집자** 역할이 있는 경우에만 링크를 추가하거나 편집할 수 있습니다.

모바일을 사용 중이라면 상단의 **링크 아이콘** 을 사용하여 뉴스를 읽으세요.

### 오른쪽 열
RSS 피드는 *newswire* 로 알려진 오른쪽 열에 추가할 수 있습니다. 이것은 데스크탑 디스플레이 또는 더 큰 화면이 있는 장치에만 나타납니다. **관리자** 또는 **편집자** 역할이 있는 경우에만 피드를 추가하거나 편집할 수 있으며 수신되는 피드 항목도 검토할 수 있습니다.

모바일을 사용 중이라면 상단의 **newswire 아이콘** 을 사용하여 뉴스를 읽으세요.
@@ -0,0 +1,19 @@
Inkomende berichten verschijnen hier, als een chronologische tijdlijn. Als u berichten verzendt, verschijnen deze ook hier.

### De bovenste banner
Bovenaan het scherm kun je de **banner** selecteren om naar je profiel te gaan, dit te bewerken of uit te loggen.

### Tijdlijnknoppen en pictogrammen
Met de **knoppen** onder de bovenste banner kun je verschillende tijdlijnen selecteren. Er zijn ook **pictogrammen** aan de rechterkant om te **zoeken**, uw **agenda** te bekijken of **nieuwe berichten** te maken.

Met het **toon/verberg**-pictogram kunnen meer tijdlijnknoppen worden weergegeven, samen met moderatorcontroles.

### Linker kolom
Hier kunt u **handige links** toevoegen. Dit verschijnt alleen op desktopschermen of apparaten met grotere schermen. Het is vergelijkbaar met een *blogroll*. Je kunt alleen links toevoegen of bewerken als je de rol van **beheerder** of **editor** hebt.

Als je mobiel bent, gebruik dan het **links-pictogram** bovenaan om nieuws te lezen.

### Rechterkolom
RSS-feeds kunnen worden toegevoegd in de rechterkolom, ook wel de *newswire* genoemd. Dit verschijnt alleen op desktopschermen of apparaten met grotere schermen. Je kunt alleen feeds toevoegen of bewerken als je de rol van **beheerder** of **editor** hebt, en inkomende feeditems kunnen ook worden gemodereerd.

Als je mobiel bent, gebruik dan het **newswire-pictogram** bovenaan om nieuws te lezen.

@@ -0,0 +1,19 @@
Przychodzące posty pojawią się tutaj jako chronologiczna oś czasu. Jeśli wyślesz jakieś posty, pojawią się one również tutaj.

### Górny baner
U góry ekranu możesz wybrać **baner**, aby przejść do swojego profilu, edytować go lub się wylogować.

### Przyciski i ikony osi czasu
**Przyciski** pod górnym banerem umożliwiają wybór różnych osi czasu. Po prawej stronie znajdują się również **ikony** umożliwiające **wyszukiwanie**, przeglądanie **kalendarza** lub tworzenie **nowych postów**.

Ikona **pokaż/ukryj** umożliwia wyświetlenie większej liczby przycisków osi czasu wraz z elementami sterującymi moderatora.

### Lewa kolumna
Tutaj możesz dodać **przydatne linki**. Pojawia się tylko na wyświetlaczach stacjonarnych lub urządzeniach z większymi ekranami. Jest podobny do *bloga*. Możesz dodawać i edytować linki tylko wtedy, gdy masz rolę **administrator** lub **edytor**.

Jeśli korzystasz z telefonu komórkowego, użyj **ikony linków** u góry, aby przeczytać wiadomości.

### Prawa kolumna
Kanały RSS można dodawać w prawej kolumnie, zwanej *newswire*. Pojawia się tylko na wyświetlaczach stacjonarnych lub urządzeniach z większymi ekranami. Możesz dodawać i edytować kanały tylko wtedy, gdy masz rolę **administratora** lub **edytora**, a przychodzące elementy kanału też mogą być moderowane.

Jeśli korzystasz z telefonu komórkowego, użyj **ikony newswire** u góry, aby przeczytać wiadomości.

@@ -0,0 +1,19 @@
Gelen gönderiler burada kronolojik bir zaman çizelgesi olarak görünecektir. Herhangi bir gönderi gönderirseniz, burada da görünürler.

### Üst afiş
Ekranın üst kısmındaki **başlığı** seçerek profilinize geçiş yapabilir, profilinizi düzenleyebilir veya oturumu kapatabilirsiniz.

### Zaman çizelgesi düğmeleri ve simgeler
Üst başlığın altındaki **düğmeler**, farklı zaman çizelgeleri seçmenize olanak tanır. Ayrıca sağ tarafta **arama**, **takviminizi** görüntüleme veya **yeni gönderiler** oluşturma için **simgeler** vardır.

**Göster/gizle** simgesi, moderatör kontrolleriyle birlikte daha fazla zaman çizelgesi düğmesinin gösterilmesini sağlar.

### Sol sütun
Buraya **faydalı bağlantılar** ekleyebilirsiniz. Bu, yalnızca masaüstü ekranlarında veya daha büyük ekranlı cihazlarda görünür. Bir *blogroll*'a benzer. Yalnızca bir **yönetici** veya **düzenleyici** rolünüz varsa bağlantı ekleyebilir veya düzenleyebilirsiniz.

Mobildeyseniz haberleri okumak için üst kısımdaki **bağlantılar simgesini** kullanın.

### Sağ sütun
RSS beslemeleri, *haber teli* olarak bilinen sağ sütuna eklenebilir. Bu, yalnızca masaüstü ekranlarında veya daha büyük ekranlı cihazlarda görünür. Yalnızca bir **yönetici** veya **düzenleyici** rolünüz varsa yayın ekleyebilir veya yayınları düzenleyebilirsiniz ve gelen yayın öğeleri de denetlenebilir.

Mobildeyseniz haberleri okumak için üst kısımdaki **haber teli simgesini** kullanın.

@@ -0,0 +1,19 @@
Вхідні дописи відображатимуться тут у вигляді хронологічної шкали. Якщо ви надішлете будь-які дописи, вони також з’являться тут.

### Верхній банер
У верхній частині екрана ви можете вибрати **банер**, щоб перейти до свого профілю, відредагувати його або вийти.

### Кнопки та значки шкали часу
**Кнопки** під верхнім банером дозволяють вибрати різні часові шкали. Також є **значки** праворуч для **пошуку**, перегляду **календаря** або створення **нових дописів**.

Значок **показати/приховати** дозволяє відображати більше кнопок шкали часу, а також елементи керування модератором.

### Ліва колона
Тут ви можете додати **корисні посилання**. Це відображається лише на настільних дисплеях або пристроях з великими екранами. Це схоже на *блогрол*. Ви можете додавати або редагувати посилання, лише якщо у вас є роль **адміністратора** або **редактора**.

Якщо ви користуєтеся мобільним телефоном, використовуйте **значок посилань** угорі, щоб читати новини.

### Права колонка
RSS-канали можна додавати в праву колонку, відому як *новинна стрічка*. Це відображається лише на настільних дисплеях або пристроях з великими екранами. Ви можете додавати або редагувати канали, лише якщо у вас є роль **адміністратора** або **редактора**, а вхідні елементи каналу також можна модерувати.

Якщо ви користуєтеся мобільним телефоном, скористайтеся **значком новин** угорі, щоб читати новини.

@@ -0,0 +1,19 @@
ינקאַמינג אַרטיקלען וועט דערשייַנען דאָ, ווי אַ קראַנאַלאַדזשיקאַל טיימליין. אויב איר שיקן אַרטיקלען, זיי וועלן אויך דערשייַנען דאָ.

### די שפּיץ פאָן
אין די שפּיץ פון די פאַרשטעלן איר קענען אויסקלייַבן דעם **פאָן** צו באַשטימען צו דיין פּראָפיל, און רעדאַגירן עס אָדער קלאָץ אויס.

### טיימליין קנעפּלעך און ייקאַנז
די קנעפּלעך אונטער די שפּיץ פאָן לאָזן איר צו אויסקלייַבן פאַרשידענע טיימליינז. עס זענען אויך ייקאַנז אויף די רעכט צו זוכן, זען דיין קאַלענדאַר אָדער שאַפֿן נייַע אַרטיקלען.

די **ווייַזן / באַהאַלטן** ייקאַן אַלאַוז מער טיימליין קנעפּלעך צו זיין געוויזן, צוזאַמען מיט מאָדעראַטאָר קאָנטראָלס.

### לינקס זייַל
דאָ איר קענען לייגן **נוציק לינקס**. דאָס איז בלויז אויף דעסקטאַפּ דיספּלייז אָדער דעוויסעס מיט גרעסערע סקרינז. עס איז ענלעך צו אַ *בלאָגראָלל*. איר קענט בלויז לייגן אָדער רעדאַגירן לינקס אויב איר האָבן אַן **אַדמיניסטראַטאָר** אָדער **רעדאַקטאָר** ראָלע.

אויב איר זענט אויף רירעוודיק, נוצן די **לינקס בילדל** אין דער שפּיץ צו לייענען נייַעס.

### רעכט זייַל
RSS פידז קענען זיין מוסיף אין די רעכט זייַל, באקאנט ווי די *newswire*. דאָס איז בלויז אויף דעסקטאַפּ דיספּלייז אָדער דעוויסעס מיט גרעסערע סקרינז. איר קענט בלויז לייגן אָדער רעדאַגירן פידז אויב איר האָבן אַן **אַדמיניסטראַטאָר** אָדער **רעדאַקטאָר** ראָלע, און ינקאַמינג פיטער זאכן קענען אויך זיין מאַדערייטיד.

אויב איר זענט אויף רירעוודיק, נוצן די **נעווסווירע ייקאַן** אין דער שפּיץ צו לייענען נייַעס.
@@ -0,0 +1 @@
আপনার পাঠানো পোস্টগুলি একটি কালানুক্রমিক টাইমলাইন হিসাবে এখানে উপস্থিত হবে৷

@@ -0,0 +1 @@
Οι αποσταλμένες αναρτήσεις σας θα εμφανίζονται εδώ, ως χρονολογική γραμμή χρόνου.

@@ -0,0 +1 @@
보낸 게시물은 시간순으로 여기에 표시됩니다.

@@ -0,0 +1 @@
Je verzonden berichten verschijnen hier als een chronologische tijdlijn.

@@ -0,0 +1 @@
Twoje wysłane posty pojawią się tutaj jako chronologiczna oś czasu.

@@ -0,0 +1 @@
Gönderilen gönderileriniz burada kronolojik bir zaman çizelgesi olarak görünecektir.

@@ -0,0 +1 @@
Ваші надіслані повідомлення відображатимуться тут у хронологічній хронології.

@@ -0,0 +1 @@
דיין געשיקט אַרטיקלען וועט דערשייַנען דאָ, ווי אַ קראַנאַלאַדזשיקאַל טיימליין.
@@ -0,0 +1,5 @@
এই টাইমলাইনে আপনার বা আপনি অনুসরণ করছেন এমন কারোর লেখা যেকোনো ব্লগ রয়েছে৷

আপনি ডান কলামের উপরে **প্রকাশ** আইকন ব্যবহার করে একটি নতুন ব্লগ পোস্ট তৈরি করতে পারেন।

ব্লগ পোস্ট সাধারণ ফেডিভার্স পোস্ট থেকে ভিন্ন. তারা ActivityPub *Article* টাইপ ব্যবহার করে, যেটি দীর্ঘ-ফর্ম লেখার উদ্দেশ্যে। তারা নিউজওয়্যার আইটেম থেকে নির্বাচিত উদ্ধৃতি থাকতে পারে.

@@ -0,0 +1,5 @@
Αυτό το χρονοδιάγραμμα περιέχει τυχόν ιστολόγια γραμμένα από εσάς ή οποιονδήποτε ακολουθείτε.

Μπορείτε να δημιουργήσετε μια νέα ανάρτηση ιστολογίου χρησιμοποιώντας το εικονίδιο **δημοσίευση** στο επάνω μέρος της δεξιάς στήλης.

Οι αναρτήσεις ιστολογίου διαφέρουν από τις συνηθισμένες αναρτήσεις σε ό,τι αφορά τις αναρτήσεις. Χρησιμοποιούν τον τύπο ActivityPub *Article*, ο οποίος προορίζεται για μεγάλη γραφή. Μπορούν επίσης να έχουν παραπομπές, επιλεγμένες από στοιχεία στο newswire.

@@ -0,0 +1,5 @@
이 타임라인에는 귀하 또는 귀하가 팔로우하는 모든 사람이 작성한 블로그가 포함됩니다.

오른쪽 열 상단의 **게시** 아이콘을 사용하여 새 블로그 게시물을 만들 수 있습니다.

블로그 포스트는 일반 페디버스 포스트와 다릅니다. 그들은 긴 형식의 쓰기를 위한 ActivityPub *Article* 유형을 사용합니다. 그들은 또한 뉴스와이어의 항목에서 선택된 인용을 가질 수 있습니다.

@@ -0,0 +1,5 @@
Deze tijdlijn bevat alle blogs die zijn geschreven door jou of iemand die je volgt.

U kunt een nieuwe blogpost maken met behulp van het **publiceren**-pictogram bovenaan de rechterkolom.

Blogberichten zijn anders dan gewone fediverse berichten. Ze gebruiken het ActivityPub *Article*-type, dat bedoeld is om in lange vorm te schrijven. Ze kunnen ook citaten hebben, geselecteerd uit items in de nieuwsdraad.

@@ -0,0 +1,5 @@
Ta oś czasu zawiera wszystkie blogi napisane przez Ciebie lub osoby, które obserwujesz.

Możesz utworzyć nowy post na blogu, używając ikony **opublikuj** u góry prawej kolumny.

Posty na blogu różnią się od zwykłych postów fediverse. Używają typu ActivityPub *Artykuł*, który jest przeznaczony do pisania długich form. Mogą również mieć cytaty wybrane z artykułów w newswire.

@@ -0,0 +1,5 @@
Bu zaman çizelgesi, sizin veya takip ettiğiniz herhangi birinin yazdığı tüm blogları içerir.

Sağ sütunun üst kısmındaki **yayınla** simgesini kullanarak yeni bir blog yazısı oluşturabilirsiniz.

Blog gönderileri, sıradan federasyon gönderilerinden farklıdır. Uzun biçimli yazmaya yönelik ActivityPub *Makale* türünü kullanırlar. Ayrıca haber telindeki öğelerden seçilen alıntılara da sahip olabilirler.

@@ -0,0 +1,5 @@
Ця часова шкала містить усі блоги, написані вами або кимось, на кого ви читаєте.

Ви можете створити нову публікацію в блозі за допомогою значка **опублікувати** у верхній частині правого стовпця.

Дописи в блозі відрізняються від звичайних дописів fediverse. Вони використовують тип ActivityPub *Article*, який призначений для довгого письма. Вони також можуть мати цитати, вибрані з елементів у ленті новин.

@@ -0,0 +1,5 @@
די טיימליין כּולל בלאָגס געשריבן דורך איר אָדער ווער עס יז וואָס איר נאָכפאָלגן.

איר קענט שאַפֿן אַ נייַע בלאָג פּאָסטן מיט די **אַרויסגעבן** בילדל אין די שפּיץ פון די רעכט זייַל.

בלאָג אַרטיקלען זענען אַנדערש פון פּראָסט פעדיווערס אַרטיקלען. זיי נוצן די ActivityPub *אַרטיקל* טיפּ, וואָס איז בדעה פֿאַר לאַנג-פאָרעם שרייבן. זיי קענען אויך האָבן סייטיישאַנז, אויסגעקליבן פון זאכן אין די נייַעס.
@@ -0,0 +1 @@
যেকোনো বুকমার্ক করা পোস্ট এখানে উপস্থিত হয়।

@@ -0,0 +1 @@
Τυχόν αναρτήσεις με σελιδοδείκτες εμφανίζονται εδώ.

@@ -0,0 +1 @@
북마크된 모든 게시물이 여기에 표시됩니다.

@@ -0,0 +1 @@
Berichten met een bladwijzer verschijnen hier.

@@ -0,0 +1 @@
Tutaj pojawiają się wszystkie posty dodane do zakładek.

@@ -0,0 +1 @@
Yer imi eklenmiş tüm gönderiler burada görünür.

@@ -0,0 +1 @@
Тут відображаються будь-які дописи з закладками.

@@ -0,0 +1 @@
קיין באָאָקמאַרקעד אַרטיקלען דערשייַנען דאָ.
@@ -0,0 +1 @@
যেকোনো ইনকামিং পোস্ট যেখানে **ছবি**, **ভিডিও** বা **অডিও** ফাইল রয়েছে সেগুলোর বিবরণ সহ এখানে উপস্থিত হবে।