# capsulflask
Python Flask web application for capsul.org
## how to run locally
Ensure you have the prerequisites for the `psycopg2` Postgres database adapter package
```
sudo apt install python3-dev libpq-dev
pg_config --version
```
Ensure you have the wonderful `pipenv` Python package management and virtual environment CLI
```
sudo apt install pipenv
```
Create a Python virtual environment and install packages
```
# install deps
pipenv install
# load the deps into $PATH
pipenv shell
```
Run an instance of Postgres (I used Docker for this, but you can use whatever you want; the point is that it's listening on localhost:5432)
```
docker run --rm -it -e POSTGRES_PASSWORD=dev -p 5432:5432 postgres
```
Modify the default email settings
```
nano capsulflask/__init__.py
```
Run the app
```
FLASK_APP=capsulflask flask run
```
Run the app locally with gunicorn
```
pip install gunicorn
.venv/bin/gunicorn --bind 127.0.0.1:5000 capsulflask:app
```
## postgres database schema management
capsulflask has a concept of a schema version. When the application starts, it will query the database for a table named
`schemaversion` that has one row and one column (`version`). If the `version` it finds is not equal to the `desiredSchemaVersion` variable set in `db.py`, it will run migration scripts from the `schema_migrations` folder one by one until the `schemaversion` table shows the correct version.
For example, the script named `02_up_xyz.sql` should contain code that migrates the database from schema version 1 to schema version 2. Likewise, the script `02_down_xyz.sql` should contain code that migrates from schema version 2 back to schema version 1.
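As a concrete illustration of the naming convention, here is a hypothetical migration pair for version 2. The `notes` table and its columns are made up for this example; the actual `schemaversion` bookkeeping may be handled by `db.py` itself rather than by the scripts.

```shell
mkdir -p schema_migrations

# Hypothetical up script: migrates schema version 1 -> 2
cat > schema_migrations/02_up_notes.sql <<'EOF'
-- migrate schema version 1 -> 2
CREATE TABLE notes (
  id SERIAL PRIMARY KEY,
  body TEXT NOT NULL
);
EOF

# The matching down script undoes exactly that change: 2 -> 1
cat > schema_migrations/02_down_notes.sql <<'EOF'
-- migrate schema version 2 -> 1
DROP TABLE notes;
EOF
```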
**IMPORTANT: if you need to make changes to the schema, make a NEW schema version. DO NOT EDIT the existing schema versions.**
In general, for safety, schema version upgrades should not delete data. Schema version downgrades will simply throw an error and exit for now.
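Since downgrades just error out for now, only the upward walk matters in practice. A toy sketch of that walk (assumed logic for illustration, not the actual `db.py` code):

```shell
# Assume the database reports version 1 and db.py wants version 3.
current=1
desired=3

# Walk upward one version at a time, naming each "up" script in order.
for v in $(seq $((current + 1)) "$desired"); do
  nn=$(printf '%02d' "$v")
  echo "apply schema_migrations/${nn}_up_*.sql"
done
```

This prints `apply schema_migrations/02_up_*.sql` and then `apply schema_migrations/03_up_*.sql`, mirroring the one-by-one ordering described above.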