Split README up into separate files, plus:
* forest's ReadMe docs changes
* add Configuration-type-stuff that lives in the database

README.md
@@ -1,227 +1,51 @@

# capsul-flask

![screenshot of capsul.org home page](./docs/capsul.webp)

Python Flask web application implementing user accounts, payment, and virtual machine management for a smol "virtual machine (vm) as a service" aka "cloud compute" provider. Originally developed by [Cyberia Computer Club](https://cyberia.club) for https://capsul.org

`capsul-flask` integrates with [Stripe](https://stripe.com/) as a credit card processor, and [BTCPay Server](https://github.com/btcpayserver/btcpayserver-docker) as a cryptocurrency payment processor.

`capsul-flask` invokes [shell-scripts](./capsulflask/shell_scripts/) to create/manage [libvirt/qemu](https://www.libvirt.org/manpages/virsh.html) vms, and it depends on `dnsmasq` to act as the DHCP server for the vms.

`capsul-flask` has a ["hub and spoke" architecture](./docs/architecture.md). The "Hub" runs the web application and talks to the Postgres database, while the "Spoke"(s) are responsible for creating/managing virtual machines. In this way, capsul can be scaled to span more than one machine. One instance of the capsul-flask application can run in both hub mode and spoke mode at the same time, however there must only be one instance of the app running in "Hub" mode at any given time.

## Quickstart (run capsul-flask on your computer in development mode)

```
# get an instance of postgres running locally on port 5432
# (you don't have to use docker, but we thought this might be the easiest for a how-to example)
docker run --rm -it -e POSTGRES_PASSWORD=dev -p 5432:5432 postgres &

# install dependencies
sudo apt install pipenv python3-dev libpq-dev

# download and run
git clone https://git.cyberia.club/~forest/capsul-flask
cd capsul-flask
pipenv install
pipenv run flask run
```

Interested in learning more? How about a trip to the `docs/` folder:

- [**Setting up capsul-flask locally**](./docs/local-set-up.md)
    - [Manually](./docs/local-set-up.md#manually)
    - [With docker-compose](./docs/local-set-up.md#docker_compose)
- [**Configuring `capsul-flask`**](./docs/configuration.md)
    - [Example configuration from capsul.org (production)](./docs/configuration.md#example)
    - [Configuration-type-stuff that lives in the database](./docs/configuration.md#config_that_lives_in_db)
    - [Loading variables from files (docker secrets)](./docs/configuration.md#docker_secrets)
- [**`capsul-flask`'s relationship to its Database Server**](./docs/database.md)
    - [Database schema management (schema versions)](./docs/database.md#schema_management)
    - [Running manual database queries](./docs/database.md#manual_queries)
- [**`capsul-flask`'s hub-and-spoke architecture**](./docs/architecture.md)
- [**Deploying capsul-flask on a server**](./docs/deployment.md)
    - [Installing prerequisites for Spoke Mode](./docs/deployment.md#spoke_mode_prerequisites)
    - [Deploying capsul-flask manually](./docs/deployment.md#deploy_manually)
    - [Deploying capsul-flask with coop-cloud's docker-swarm configuration](./docs/deployment.md#coop_cloud_docker)
    - [Deploying capsul-flask with coop-cloud's `abra` deployment tool](./docs/deployment.md#coop_cloud_abra)
- [**Accepting cryptocurrency payments with BTCPay Server**](./docs/btcpay.md)
    - [Setting up the BTCPAY_PRIVATE_KEY](./docs/btcpay.md#BTCPAY_PRIVATE_KEY)
    - [Testing cryptocurrency payments](./docs/btcpay.md#testing)
    - [Sequence diagram explaining how BTC payment process works (how we accept 0-confirmation transactions 😀)](./docs/btcpay.md#0_conf_diagram)

docs/architecture.md (new file)
@@ -0,0 +1,30 @@

# hub-and-spoke architecture

The "Hub" runs the web application and talks to the Postgres database, while the "Spoke"s are responsible for creating/managing virtual machines. One instance of the capsul-flask application can run in hub mode and spoke mode at the same time.

The Hub and the Spoke must be configured to communicate securely with each other over HTTPS. They both have to be able to dial each other directly. The URLs / auth tokens they use are configured both in the config file (`HUB_URL`, `SPOKE_HOST_ID`, `SPOKE_HOST_TOKEN` and `HUB_TOKEN`) and in the database (the `id`, `https_url`, and `token` columns in the `hosts` table).
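
To make that concrete, here is a rough sketch of what a hub-to-spoke request using those settings could look like. This is an illustration only, not capsul-flask's actual code: the endpoint path, request body, and `Authorization` header scheme are made up, and the host id, URL, and token are placeholder values.

```python
# Hypothetical sketch of a hub-to-spoke call; the path, header scheme, and
# payload shape are illustrative, not the real capsul-flask API.
import requests

# one row from the hosts table: id, https_url, token
spoke = {"id": "baikal", "https_url": "https://spoke.example.com", "token": "<SPOKE_HOST_TOKEN>"}

response = requests.post(
    f"{spoke['https_url']}/spoke/operation",                 # made-up path
    json={"type": "capacity_avaliable"},                     # operation named in this doc
    headers={"Authorization": f"Bearer {spoke['token']}"},   # made-up header scheme
    timeout=10,
)
response.raise_for_status()
print(response.json())
```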

![](images/hub-and-spoke1.png)

This diagram was created with https://app.diagrams.net/.
To edit it, download the <a download href="readme/hub-and-spoke.xml">diagram file</a> and edit it with the https://app.diagrams.net/ web application, or you may run the application from [source](https://github.com/jgraph/drawio) if you wish.

Right now there are two types of operations: immediate mode and async.

Both types of operations do assignment synchronously, so if the system can't assign the operation to one or more hosts (spokes), or to whatever the operation requires, it will fail.

Some operations tolerate partial failures; for example, `capacity_avaliable` will succeed if at least one spoke succeeds.

For immediate-mode requests (like `list`, `capacity_avaliable`, and `destroy`), assignment and completion of the operation are the same thing.

For async operations (like `create`), the operation can be assigned without knowing whether or not it succeeded.

![](images/hub-and-spoke2.png)

This diagram was created with https://app.diagrams.net/.
To edit it, download the <a download href="readme/hub-and-spoke.xml">diagram file</a> and edit it with the https://app.diagrams.net/ web application, or you may run the application from [source](https://github.com/jgraph/drawio) if you wish.

If you issue a `create` that could technically go to any number of hosts, and only one host responds, it will succeed. But if you issue a `create` and somehow two hosts both think they own that task, it will fail and throw a big error, because exactly one host is expected to own the create task.

Currently it's not set up to do any polling; it's not really like a queue at all. It's all immediate for the most part.
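
The two assignment rules above can be summarised with a small sketch (hypothetical function and field names, not taken from the capsul-flask source): immediate operations like `capacity_avaliable` tolerate partial failure, while `create` must be claimed by exactly one spoke.

```python
# Illustration of the assignment semantics described above; the function and
# field names here are hypothetical, not the actual hub code.

def capacity_avaliable(spoke_results):
    # Partial failure is fine: succeed if at least one spoke succeeded.
    return any(r["ok"] for r in spoke_results)

def assign_create(spoke_results):
    # Exactly one spoke must claim ownership of the create task.
    owners = [r["host_id"] for r in spoke_results if r["claimed"]]
    if len(owners) == 1:
        return owners[0]   # assigned; the vm is created asynchronously later
    raise RuntimeError(f"create not assigned to exactly one host: {owners}")

results = [
    {"host_id": "baikal", "ok": True, "claimed": True},
    {"host_id": "other-host", "ok": False, "claimed": False},
]
print(capacity_avaliable(results))  # True -- one successful spoke is enough
print(assign_create(results))       # baikal -- exactly one owner
```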

docs/btcpay.md (new file)
@@ -0,0 +1,68 @@

## <a name="BTCPAY_PRIVATE_KEY"></a>Setting up the BTCPAY_PRIVATE_KEY

Generate a private key and the accompanying BitPay SIN for the btcpay API client.

I used this code as an example: https://github.com/bitpay/bitpay-python/blob/master/bitpay/key_utils.py#L6

```
$ pipenv run python ./readme/generate_btcpay_keys.py
```

It should output something looking like this:

```
-----BEGIN EC PRIVATE KEY-----
EXAMPLEIArx/EXAMPLEKH23EXAMPLEsYXEXAMPLE5qdEXAMPLEcFHoAcEXAMPLEK
oUQDQgAEnWs47PT8+ihhzyvXX6/yYMAWWODluRTR2Ix6ZY7Z+MV7v0W1maJzqeqq
NQ+cpBvPDbyrDk9+Uf/sEaRCma094g==
-----END EC PRIVATE KEY-----


EXAMPLEwzAEXAMPLEEXAMPLEURD7EXAMPLE
```

In order to register the key with the btcpay server, you have to first generate a pairing token using the btcpay server interface.
This requires your btcpay server account to have access to the capsul store. Ask Cass about this.

Navigate to `Manage store: Access Tokens` at: `https://btcpay.cyberia.club/stores/<store-id>/Tokens`

![](images/btcpay_sin_pairing.jpg)

Finally, send an HTTP request to the btcpay server to complete the pairing:

```
curl -H "Content-Type: application/json" https://btcpay.cyberia.club/tokens -d '{"id": "EXAMPLEwzAEXAMPLEEXAMPLEURD7EXAMPLE", "pairingCode": "XXXXXXX"}'
```
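
If you would rather do the pairing from Python than from curl, a roughly equivalent request with the `requests` library looks like this (the `id` is the SIN printed by `generate_btcpay_keys.py`, and `pairingCode` is the code from the Access Tokens page; both values below are placeholders):

```python
# Equivalent of the curl pairing request above, using the requests library.
import requests

resp = requests.post(
    "https://btcpay.cyberia.club/tokens",
    json={"id": "EXAMPLEwzAEXAMPLEEXAMPLEURD7EXAMPLE", "pairingCode": "XXXXXXX"},
)
resp.raise_for_status()
print(resp.json())
```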

It should respond with a token:

```
{"data":[{"policies":[],"pairingCode":"XXXXXXX","pairingExpiration":1589473817597,"dateCreated":1589472917597,"facade":"merchant","token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","label":"capsulflask"}]}
```

And you should see the token in the btcpay server UI:

![](images/paired.jpg)

Now simply set your `BTCPAY_PRIVATE_KEY` variable in `.env`.

NOTE: make sure to use single quotes and replace the newlines with `\n`.

```
BTCPAY_PRIVATE_KEY='-----BEGIN EC PRIVATE KEY-----\nEXAMPLEIArx/EXAMPLEKH23EXAMPLEsYXEXAMPLE5qdEXAMPLEcFHoAcEXAMPLEK\noUQDQgAEnWs47PT8+ihhzyvXX6/yYMAWWODluRTR2Ix6ZY7Z+MV7v0W1maJzqeqq\nNQ+cpBvPDbyrDk9+Uf/sEaRCma094g==\n-----END EC PRIVATE KEY-----'
```
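
One easy way to produce that single-line value is to let Python do the escaping for you (a convenience snippet, assuming you saved the generated key to a file; the filename is just an example):

```python
# Read the PEM key generated earlier and print it with literal "\n" escapes,
# ready to paste into the BTCPAY_PRIVATE_KEY= line of .env.
with open("btcpay_private_key.pem") as f:   # example path
    pem = f.read().strip()

print("BTCPAY_PRIVATE_KEY='" + pem.replace("\n", "\\n") + "'")
```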

-----

## <a name="testing"></a>testing cryptocurrency payments

I used litecoin to test cryptocurrency payments, because it's the simplest & lowest-fee cryptocurrency that BTCPay server supports. You can download the easy-to-use litecoin SPV wallet `electrum-ltc` from [github.com/pooler/electrum-ltc](https://github.com/pooler/electrum-ltc) or [electrum-ltc.org](https://electrum-ltc.org/), set up a wallet, and then either purchase some litecoin from an exchange, or [ask Forest for some litecoin](https://sequentialread.com/capsul-rollin-onwards-with-a-web-application/#sqr-comment-container) to use for testing.

## <a name="0_conf_diagram"></a>sequence diagram explaining how BTC payment process works (how we accept 0-confirmation transactions 😀)

![btcpayment_process](images/btcpayment_process.png)

This diagram was created with https://app.diagrams.net/.
To edit it, download the <a download href="readme/btcpayment_process.drawio">diagram file</a> and edit it with the https://app.diagrams.net/ web application, or you may run the application from [source](https://github.com/jgraph/drawio) if you wish.

docs/capsul.webp (new binary file, 10 KiB)

docs/configuration.md (new file)
@@ -0,0 +1,89 @@

# Configuring Capsul-Flask

Create a `.env` file to set up the application configuration:

```
nano .env
```

You can enter any environment variables referenced in [`__init__.py`](../capsulflask/__init__.py) into this file.

For example, you may enter your SMTP credentials like this:

```
MAIL_USERNAME=forest@nullhex.com
MAIL_DEFAULT_SENDER=forest@nullhex.com
MAIL_PASSWORD=**************
```

## <a name="example"></a>Example configuration from capsul.org (production)

```
#LOG_LEVEL=DEBUG

BASE_URL="https://capsul.org"

# hub url is used by the SPOKE_MODE to contact the hub. Since this server is the hub,
# this is fine. In fact it runs into problems (routing related?) when I set it to capsul.org.
# similarly the baikal "spoke" (set up in the hosts table in the db) has "http://localhost:5000" as the https_url
HUB_URL="http://localhost:5000"

HUB_MODE_ENABLED="t"
SPOKE_MODE_ENABLED="t"
HUB_MODEL="capsul-flask"
SPOKE_MODEL="shell-scripts"
SPOKE_HOST_ID="baikal"
SPOKE_HOST_TOKEN="<redacted>"
HUB_TOKEN="<redacted>"

# smtp.. see https://flask-mail.readthedocs.io/en/latest/#configuring-flask-mail
MAIL_SERVER="smtp.nullhex.com"

# MAIL_USE_SSL means SMTP with STARTTLS
MAIL_USE_SSL=true

# MAIL_USE_TLS means SMTP wrapped in TLS
MAIL_USE_TLS=false

MAIL_PORT="465"
MAIL_USERNAME="capsul@nullhex.com"
MAIL_PASSWORD="<redacted>"
MAIL_DEFAULT_SENDER="capsul@nullhex.com"

# stripe
STRIPE_SECRET_KEY="sk_live_<redacted>"
STRIPE_PUBLISHABLE_KEY="pk_live_tGDHY7kBwqC71b4F0N7LZdGl00GZOw0iNJ"

# internal
SECRET_KEY="<redacted>"
POSTGRES_CONNECTION_PARAMETERS="sslmode=verify-full sslrootcert=letsencrypt-root-ca.crt host=postgres.cyberia.club port=5432 ...<redacted>"

# btcpay server
BTCPAY_URL="https://beeteeceepae2.cyberia.club"
BTCPAY_PRIVATE_KEY='-----BEGIN EC PRIVATE KEY-----\n<redacted>\n-----END EC PRIVATE KEY-----'
```

## <a name="config_that_lives_in_db"></a>Configuration-type-stuff that lives in the database

(One way to add rows to these tables is sketched just after the list below.)

- `hosts` table:
    - `id` (corresponds to `SPOKE_HOST_ID` in the config)
    - `https_url`
    - `token` (corresponds to `SPOKE_HOST_TOKEN` in the config)
- `os_images` table:
    - `id`
    - `template_image_file_name`
    - `description`
    - `deprecated`
- `vm_sizes` table:
    - `id`
    - `dollars_per_month`
    - `memory_mb`
    - `vcpus`
    - `bandwidth_gb_per_month`
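
For illustration, here is one way a new `hosts` row could be added with `psycopg2` (a sketch only: the id, URL, and token are placeholders, the connection string assumes the local development database from the quickstart, and your schema version may have additional columns). The same statement could equally be run with `pipenv run flask cli sql -c "..."`.

```python
# Sketch: register a spoke by inserting a row into the hosts table.
# Placeholder values; the token must match that spoke's SPOKE_HOST_TOKEN.
import psycopg2

conn = psycopg2.connect("host=localhost port=5432 user=postgres password=dev")
with conn, conn.cursor() as cur:
    cur.execute(
        "INSERT INTO hosts (id, https_url, token) VALUES (%s, %s, %s)",
        ("baikal", "http://localhost:5000", "<SPOKE_HOST_TOKEN>"),
    )
conn.close()
```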

## <a name="docker_secrets"></a>Loading variables from files (docker secrets)

To support [Docker Secrets](https://docs.docker.com/engine/swarm/secrets/), you can also load secret values from files – for example, to load `MAIL_PASSWORD` from `/run/secrets/mail_password`, set:

```sh
MAIL_PASSWORD_FILE=/run/secrets/mail_password
```
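
Under the hood, this `_FILE` convention is typically implemented with a tiny helper like the following (a generic sketch of the pattern, not necessarily the exact code in capsul-flask):

```python
# Generic sketch of the *_FILE pattern used for Docker Secrets:
# if VARNAME_FILE is set, read the value from that file, otherwise
# fall back to the plain VARNAME environment variable.
import os

def env_or_secret_file(name, default=None):
    file_path = os.environ.get(f"{name}_FILE")
    if file_path:
        with open(file_path) as f:
            return f.read().strip()
    return os.environ.get(name, default)

mail_password = env_or_secret_file("MAIL_PASSWORD")
```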

docs/database.md (new file)
@@ -0,0 +1,50 @@

# capsul-flask's relationship to its Database Server

Capsul has a ["hub and spoke" architecture](./architecture.md). The "Hub" runs the web application and talks to the Postgres database, while the "Spoke"s are responsible for creating/managing virtual machines. One instance of the capsul-flask application can run in both hub mode and spoke mode at the same time, however there must only be one instance of the app running in "Hub" mode at any given time.

The Postgres connection parameters are [configurable](./configuration.md).

## <a name="schema_management"></a>Database schema management (schema versions)

capsul-flask has a concept of a schema version. When the application starts, it will query the database for a table named `schemaversion` that has one row and one column (`version`). If the `version` it finds is not equal to the `desiredSchemaVersion` variable set in `db.py`, it will run migration scripts from the `schema_migrations` folder one by one until the `schemaversion` table shows the correct version.

For example, the script named `02_up_xyz.sql` should contain code that migrates the database from schema version 1 to schema version 2. Likewise, the script `02_down_xyz.sql` should contain code that migrates from schema version 2 back to schema version 1.

**IMPORTANT: if you need to make changes to the schema, make a NEW schema version. DO NOT EDIT the existing schema versions.**

In general, for safety, schema version upgrades should not delete data. Schema version downgrades will simply throw an error and exit for now.
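
Conceptually, the upgrade loop looks something like this sketch (simplified for illustration; the real logic lives in `capsulflask/db.py` and the details of file matching and transaction handling may differ):

```python
# Simplified sketch of the schema-version upgrade behaviour described above.
# Assumes migration files named like "02_up_xyz.sql" inside schema_migrations/.
import glob

def upgrade_schema(cursor, desired_schema_version, migrations_dir="schema_migrations"):
    cursor.execute("SELECT version FROM schemaversion")
    current = cursor.fetchone()[0]

    while current < desired_schema_version:
        next_version = current + 1
        matches = glob.glob(f"{migrations_dir}/{next_version:02d}_up_*.sql")
        if len(matches) != 1:
            raise RuntimeError(f"expected exactly one up-migration for version {next_version}")
        with open(matches[0]) as f:
            cursor.execute(f.read())
        cursor.execute("UPDATE schemaversion SET version = %s", (next_version,))
        current = next_version
```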

## <a name="manual_queries"></a>Running manual database queries

You can manually mess around with the database like this:

```
pipenv run flask cli sql -f test.sql
```

```
pipenv run flask cli sql -c 'SELECT * FROM vms'
```

This one selects the vms table with the column name header:

```
pipenv run flask cli sql -c "SELECT string_agg(column_name::text, ', ') from information_schema.columns WHERE table_name='vms'; SELECT * from vms"
```

How to modify a payment manually, for example if you get a chargeback or need to fix a customer payment issue:

```
$ pipenv run flask cli sql -c "SELECT id, created, email, dollars, invalidated from payments"
1, 2020-05-05T00:00:00, forest.n.johnson@gmail.com, 20.00, FALSE

$ pipenv run flask cli sql -c "UPDATE payments SET invalidated = True WHERE id = 1"
1 rows affected.

$ pipenv run flask cli sql -c "SELECT id, created, email, dollars, invalidated from payments"
1, 2020-05-05T00:00:00, forest.n.johnson@gmail.com, 20.00, TRUE
```

## how to view the logs on the database server (legion.cyberia.club)

`sudo -u postgres pg_dump capsul-flask | gzip -9 > capsul-backup-2021-02-15.gz`

docs/deployment.md (new file)
@@ -0,0 +1,87 @@

# Deploying Capsul on a server

Capsul has a ["hub and spoke" architecture](./architecture.md). The "Hub" runs the web application and talks to the Postgres database, while the "Spoke"s are responsible for creating/managing virtual machines. One instance of the capsul-flask application can run in both hub mode and spoke mode at the same time, however there must only be one instance of the app running in "Hub" mode at any given time.

## <a name="spoke_mode_prerequisites"></a>Installing prerequisites for Spoke Mode

On your spoke (see [Architecture](./architecture.md)) you'll need `libvirtd`, `dnsmasq`, and `qemu-kvm`, plus a `/tank` directory with some operating system images in it:

```
sudo apt install libvirt-daemon-system virtinst git dnsmasq qemu qemu-kvm
sudo mkdir -p /var/www /tank/{vm,img,config}
sudo mkdir -p /tank/img/debian/10
cd !$
sudo wget https://cloud.debian.org/images/cloud/buster/20201023-432/debian-10-genericcloud-amd64-20201023-432.qcow2 -O root.img.qcow2
```

TODO: network set-up

TODO: cyberia-cloudinit.yml

## Deploying capsul-flask

### <a name="deploy_manually"></a>Manually

Follow the [local set-up instructions](./local-set-up.md) on your server.

Make sure to set `BASE_URL` correctly, generate your own secret tokens, and configure your own daemon management for the capsul-flask server (e.g. writing init scripts, or SystemD unit files).
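
For the secret tokens (`SECRET_KEY`, `HUB_TOKEN`, `SPOKE_HOST_TOKEN`, and so on), any long random string works; one simple way to generate them (just a suggestion, not a capsul-flask requirement) is Python's `secrets` module:

```python
# Print a freshly generated random token for each config variable you need.
import secrets

for name in ("SECRET_KEY", "HUB_TOKEN", "SPOKE_HOST_TOKEN"):
    print(f'{name}="{secrets.token_hex(32)}"')
```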

Use the suggested `gunicorn` command (with appropriately-set address and port), instead of `flask run`, to launch the server.

For example, here is the SystemD service unit file we use in production for `capsul.org`:

```
[Unit]
Description=capsul-flask virtual machines as a service
After=network.target

[Service]
ExecStart=/usr/local/bin/pipenv run gunicorn --bind 127.0.0.1:5000 -k gevent --worker-connections 1000 app:app
Restart=on-failure
WorkingDirectory=/opt/capsul-flask

[Install]
WantedBy=multi-user.target
```

TODO: cron runner is required to run maintenance tasks for now, but in the future we want to build this into the python based task scheduler.

### <a name="coop_cloud_docker"></a>Using Co-op Cloud's vanilla Docker Swarm configuration

Download the Co-op Cloud swarm `compose.yml`:

```sh
wget https://git.autonomic.zone/coop-cloud/capsul/src/branch/main/compose.yml
```

Optionally, download add-on compose files for Stripe, BTCPay, and Spoke Mode:

```sh
wget https://git.autonomic.zone/coop-cloud/capsul/src/branch/main/compose.{stripe,btcpay,spoke}.yml
```

Then, create a `.env` file and configure it appropriately -- you probably want to define most settings from [the Co-op Cloud `.envrc.sample` file](https://git.autonomic.zone/coop-cloud/capsul/src/branch/main/.envrc.sample).

Load the environment variables (using `direnv`, or a manual `set -a && source .env && set +a`), insert any necessary secrets, then run the deployment:

```sh
docker stack deploy -c compose.yml -c compose.stripe.yml your_capsul
```

(where you'd add an extra `-c compose.btcpay.yml` for each optional compose file you want, and set `your_capsul` to the "stack name" you want).

TODO: cron runner

### <a name="coop_cloud_abra"></a>Using Co-op Cloud's `abra` deployment tool

Follow [the guide in the README for the Co-op Cloud capsul package](https://git.autonomic.zone/coop-cloud/capsul/).

### Using docker-compose

TODO

(5 binary image files renamed, sizes unchanged: 35 KiB, 190 KiB, 41 KiB, 49 KiB, 22 KiB)

docs/local-set-up.md (new file)
@@ -0,0 +1,68 @@

# How to run Capsul locally

## <a name="manually"></a>Manually

Ensure you have the pre-requisites for the psycopg2 Postgres database adapter package:

```sh
sudo apt install python3-dev libpq-dev
pg_config --version
```

Ensure you have the wonderful `pipenv` python package management and virtual environment cli:

```sh
sudo apt install pipenv
```

Create the python virtual environment and install packages:

```sh
pipenv install
```

Run an instance of Postgres (I used docker for this, you can use whatever you want, the point is that it's listening on `localhost:5432`):

```sh
docker run --rm -it -e POSTGRES_PASSWORD=dev -p 5432:5432 postgres
```

Run the app:

```sh
pipenv run flask run
```

or, using Gunicorn:

```sh
pipenv run gunicorn --bind 127.0.0.1:5000 -k gevent --worker-connections 1000 app:app
```

Note that by default when running locally, the `SPOKE_MODEL` is set to `mock`, meaning that it won't actually try to spawn vms.

## Crediting your account

Once you log in for the first time, you will want to give yourself some free capsulbux so you can create fake capsuls for testing.

```sh
pipenv run flask cli sql -c "INSERT INTO payments (email, dollars) VALUES ('<your email address here>', 20.00)"
```

## Running scheduled tasks

```sh
pipenv run flask cli cron-task
```

## <a name="docker_compose"></a>Run locally with docker-compose

If you have Docker and Docker-Compose installed, you can use the `3wordchant/capsul-flask` Docker image to launch capsul-flask, and a Postgres database server, for you:

```sh
docker-compose up
```

`capsul-flask` will read settings from your `.env` file as usual; you can set any of the options mentioned in the [configuration documentation](./configuration.md).