From 9bb53e471429ac0890b7d7ec14a63d05d3002af1 Mon Sep 17 00:00:00 2001
From: Brady McDonough
Date: Sun, 2 Feb 2025 18:33:19 -0700
Subject: [PATCH] More Ramble

---
 README.md | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/README.md b/README.md
index 621c91e..a28b692 100644
--- a/README.md
+++ b/README.md
@@ -7,13 +7,21 @@ If you want to run this all yourself the project depends on:
 * [Guile][gui]
 * [Artanis][art]
 * [Elm][elm]
+* [nginx][ngi]
 
 It uses autoconf and make and can be deployed locally by running:
 ```
 ./configure; make; make install; make up
 ```
+Don't forget to restart nginx after invoking the up or down target!
 
 # Notes
+There's a lot that could be said about how I chose to calculate risk, and unfortunately that's the part of the project that was never completed. The idea of calculating risk of exposure is at least somewhat novel, but there are major hurdles that keep such calculations from being accurate, and it should be noted that exposure isn't the same as infection. Based on this data I was almost certainly exposed to Covid-19, seeing hundreds of people a day at the height of the pandemic; perhaps I was also infected at some point, but I never ended up with symptoms.
+
+There are reasons to think that this data both overestimates and underestimates the real risk of exposure. For starters, the reporting mechanism was inconsistent: it relied on a mix of reporters, some of whom never worked weekends and would variably attribute data reported on Saturday and Sunday either to the day it was recorded or lump it into Monday's numbers. That forced me to work in 7-day increments and rely on that averaging to smooth out the spike of catch-up reports appearing on Mondays (a rough sketch of this smoothing appears after the patch). To varying degrees there were also constant reports of both under- and over-reporting. The data has no way to capture cases in a pre-symptomatic stage or false testing outcomes. An infected individual's behavior would also likely be greatly modified; by all accounts Covid quite reliably put you out of commission, and it's unlikely someone with an active case would be out and about beyond a certain point.
+
+With all of these challenges, which I don't have the means to account for, I decided to build it anyway and rely on the mix of positive and negative biases averaging out to an estimate that was good enough for at least my purposes.
+
 ## The backend
 Scheme in general has a number of features that make it quite good at processing structured data. Parsers are quite easy to build in Lisp-like languages, and there are entire languages (Racket) devoted to that fact. S-expressions are the primary building block of all Scheme languages: both syntax and data are just a series of S-expressions, and when parsing structured data it all gets converted into a tree of S-expressions, commonly called sxml. There are strong tools for manipulating these sxml trees, namely ```pre-post-order```, which walks a tree and can switch between pre-order and post-order handling depending on the node. In this program, data is read using a modified [csv parser][csv] and output as sxml, processed using ```pre-post-order``` to remove data from the tree we aren't interested in and to label each group of data with ```hr```, before the tree is evaluated as though it's code (a sketch of this transform appears after the patch).
 The ```hr``` symbol is a syntax-rule, so when the evaluation runs, the Guile interpreter rewrites each Health Region's data into code binding that data to a variable (a sketch of this idea also appears after the patch). The same basic process is used on a list associating postal codes with health regions.
@@ -27,3 +35,4 @@ Elm is a very interesting language, it's a Haskell that compiles to Javascript.
 [gui]:https://www.gnu.org/software/guile
 [elm]:https://elm-lang.org
 [csv]:https://gitlab.com/bradymcd/guile-csv
+[ngi]:https://nginx.org/
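
For what it's worth, here is a rough, hypothetical sketch of the 7-day smoothing mentioned in the notes above. The procedure name, the plain-list input, and the sample numbers are all invented for illustration and aren't taken from the project's code.

```
(use-modules (srfi srfi-1))

;; COUNTS is assumed to be a plain list of daily case counts for one
;; health region, oldest first. Each complete 7-day window is averaged,
;; which spreads the Monday catch-up spikes across the whole window.
(define (rolling-7-day-average counts)
  (if (< (length counts) 7)
      '()
      (cons (/ (reduce + 0 (take counts 7)) 7.0)
            (rolling-7-day-average (cdr counts)))))

;; Catch-up spikes (the 45 and 50 below) no longer show up as
;; single-day jumps in the smoothed series:
(rolling-7-day-average '(12 0 0 45 10 11 9 14 0 0 50))
```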
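
To make the backend notes a little more concrete, here is a minimal sketch of the kind of ```pre-post-order``` pass described above, using Guile's ```(sxml transform)``` module. The node names (```row```, ```region-name```, ```cases```, ```notes```) and the sample tree are invented for illustration; in the real program the tree comes out of the modified csv parser.

```
(use-modules (sxml transform))

(define sample-tree
  '(*TOP* (row (region-name "Calgary") (cases "120") (notes "ignored"))
          (row (region-name "Edmonton") (cases "98") (notes "ignored"))))

(define transformed
  (pre-post-order
   sample-tree
   `((notes . ,(lambda (tag . kids) '()))            ; drop data we aren't interested in
     (row . ,(lambda (tag . kids)                    ; relabel each group of data as hr
               (cons 'hr (filter pair? kids))))
     (*text* . ,(lambda (tag text) text))            ; leave strings untouched
     (*default* . ,(lambda (tag . kids) (cons tag kids))))))

;; transformed is now:
;; (*TOP* (hr (region-name "Calgary") (cases "120"))
;;        (hr (region-name "Edmonton") (cases "98")))
```

The relabelled ```hr``` groups are what then get handed to the evaluator.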
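
Finally, a deliberately simplified, hypothetical sketch of the syntax-rule idea: the project's real ```hr``` macro and data layout aren't reproduced here, but the principle is that evaluating the relabelled tree turns each group into an ordinary definition.

```
;; Hypothetical: bind a region symbol to its list of data points.
(define-syntax hr
  (syntax-rules ()
    ((_ region datum ...)
     (define region (list datum ...)))))

;; Evaluating a labelled group then behaves like a plain definition:
(hr calgary 120 135 98)   ; expands to (define calgary (list 120 135 98))
calgary                   ; => (120 135 98)
```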