# Deploying Capsul on a server
Capsul has a "hub and spoke" architecture. The "Hub" runs the web application and talks to the Postgres database, while the "Spoke"s are responsible for creating and managing virtual machines. One instance of the capsul-flask application can run in both hub mode and spoke mode at the same time; however, there must only be one instance of the app running in "Hub" mode at any given time.
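As a sketch of how the two roles might be toggled per instance: a combined hub+spoke deployment and a spoke-only deployment would differ only in configuration. The variable names below are illustrative assumptions, not confirmed capsul-flask settings — check the project's README for the real configuration keys.

```shell
# Hypothetical environment toggles (names are assumptions, for illustration only).
# This instance runs the hub and a spoke in the same process:
HUB_MODE_ENABLED=True
SPOKE_MODE_ENABLED=True
# Additional servers would run with the hub toggle off, so that only
# one instance is ever in "Hub" mode at a time.
```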
## Installing prerequisites for Spoke Mode
On your spoke (see Architecture), you'll need `libvirtd`, `dnsmasq`, and `qemu-kvm`, plus a `/tank` directory with some operating system images in it:
```shell
sudo apt install libvirt-daemon-system virtinst git dnsmasq qemu qemu-kvm
sudo mkdir -p /var/www /tank/{vm,img,config}
sudo mkdir -p /tank/img/debian/10
cd !$   # !$ expands to the last argument of the previous command: /tank/img/debian/10
sudo wget https://cloud.debian.org/images/cloud/buster/20201023-432/debian-10-genericcloud-amd64-20201023-432.qcow2 -O root.img.qcow2
```
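If you want to preview what the `mkdir -p` brace expansion above produces (and what `cd !$` points at) without touching `/tank`, here is a sketch using a scratch directory:

```shell
# Recreate the /tank layout under a temporary directory -- no sudo needed.
tmp=$(mktemp -d)
mkdir -p "$tmp"/tank/{vm,img,config}   # expands to tank/vm, tank/img, tank/config
mkdir -p "$tmp"/tank/img/debian/10     # !$ in the real commands refers to this path
ls "$tmp"/tank
```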
TODO: network set-up
### cyberia-cloudinit.yml
The create-vm shell script depends on this file. I think right now its file path is hard-coded to `/tank/config/cyberia-cloudinit.yml`.
```yaml
#cloud-config
preserve_hostname: true
users:
  - name: cyberian
    groups: wheel
    lock_passwd: true
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh_authorized_keys:
```
### autonomic / servers.coop version for ipv6 support
```yaml
#cloud-config
package_upgrade: true
packages:
  - curl
  - wget
  - git
  - vim
bootcmd:
  - echo 'iface ens3 inet6 dhcp' >> /etc/network/interfaces.d/50-cloud-init
  - ifdown ens3; ifup ens3
final_message: "The system is finally up, after $UPTIME seconds"
users:
  - name: user
    groups: sudo
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh_authorized_keys:
```
## Deploying capsul-flask
### Manually
Follow the local set-up instructions on your server. Make sure to set `BASE_URL` correctly, generate your own secret tokens, and configure your own daemon management for the capsul-flask server (e.g. writing init scripts, or SystemD unit files). Use the suggested `gunicorn` command (with appropriately-set address and port), instead of `flask run`, to launch the server.
For example, here is the SystemD service unit file we use in production for `capsul.org`:

```ini
[Unit]
Description=capsul-flask virtual machines as a service
After=network.target

[Service]
ExecStart=/usr/local/bin/pipenv run gunicorn --bind 127.0.0.1:5000 -k gevent --worker-connections 1000 app:app
Restart=on-failure
WorkingDirectory=/opt/capsul-flask

[Install]
WantedBy=multi-user.target
```
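To install a unit like this you'd typically save it under `/etc/systemd/system/` and enable it (the file name `capsul-flask.service` is an assumption here, not something the project mandates). The sketch below writes the unit to the current directory so you can inspect it first; the commented commands show the privileged steps:

```shell
# Write the unit file locally for inspection before installing it.
cat > capsul-flask.service <<'EOF'
[Unit]
Description=capsul-flask virtual machines as a service
After=network.target

[Service]
ExecStart=/usr/local/bin/pipenv run gunicorn --bind 127.0.0.1:5000 -k gevent --worker-connections 1000 app:app
Restart=on-failure
WorkingDirectory=/opt/capsul-flask

[Install]
WantedBy=multi-user.target
EOF

# Privileged install steps (run these on the server):
#   sudo cp capsul-flask.service /etc/systemd/system/
#   sudo systemctl daemon-reload
#   sudo systemctl enable --now capsul-flask.service
```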
TODO: a cron runner is required to run maintenance tasks for now, but in the future we want to build this into the Python-based task scheduler.
## Using Co-op Cloud's vanilla Docker Swarm configuration
Download the Co-op Cloud swarm `compose.yml`:

```shell
wget https://git.autonomic.zone/coop-cloud/capsul/src/branch/main/compose.yml
```
Optionally, download add-on compose files for Stripe, BTCPay, and Spoke Mode:

```shell
wget https://git.autonomic.zone/coop-cloud/capsul/src/branch/main/compose.{stripe,btcpay,spoke}.yml
```
Then, create a `.env` file and configure appropriately -- you probably want to define most settings in the Co-op Cloud `.envrc.sample` file.
Load the environment variables (using `direnv`, or a manual `set -a && source .env && set +a`), insert any necessary secrets, then run the deployment:

```shell
docker stack deploy -c compose.yml -c compose.stripe.yml your_capsul
```

(where you'd add an extra `-c compose.btcpay.yml` for each optional compose file you want, and set `your_capsul` to the "stack name" you want).
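The manual `set -a && source .env && set +a` step works because `set -a` marks every variable assigned while sourcing for export, so child processes like `docker` can see them. A minimal sketch, using a made-up variable (your real `.env` comes from `.envrc.sample`):

```shell
# Create a throwaway .env with one illustrative variable.
cat > .env <<'EOF'
BASE_URL=https://capsul.example.com
EOF

set -a            # auto-export every variable assigned from here on
source .env
set +a            # stop auto-exporting

# The variable is now exported, so a child process can read it:
sh -c 'echo "$BASE_URL"'
```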
TODO: cron runner
## Using Co-op Cloud's `abra` deployment tool
Follow the guide in the README for the Co-op Cloud capsul package.
## Using docker-compose
TODO