Fix bug in default period calc, update readme.

This commit is contained in:
Joe Ardent 2024-03-29 08:59:14 -07:00
parent f476d6a117
commit 4917fefbec
2 changed files with 42 additions and 12 deletions


@ -10,21 +10,46 @@ SQL checking macros, so your DB needs to have the tables to allow the SQL checks
## How does it work?
You need to register the hit by doing a GET to the `/hit/:page` endpoint, where `:page` is a unique
and persistent identifier for the page; on my blog, I'm using the Zola post slug as the id. This bit
of HTML + JS shows it in action:
``` html
<p>There have been <span id="allhits">no</span> views of this page</p>
<script defer>
const hits = document.getElementById('allhits');
fetch('http://localhost:5000/hit/index.html').then((resp) => {
return resp.text();
}).then((data) => {
hits.innerHTML = data;
});
</script>
```
In this example, the `:page` is "index.html". The `/hit` endpoint registers the hit and then
returns the latest count of hits.
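The behavior described above can be sketched apart from the web and database layers. Here `register_hit` and the in-memory `HashMap` are illustrative stand-ins, not the project's actual code; the real service persists counts in SQLite via sqlx and serves them from an axum handler:

``` rust
use std::collections::HashMap;

// Illustrative sketch only: record one hit for `page` and return the new
// total, which is what `/hit/:page` sends back as the response body.
fn register_hit(counts: &mut HashMap<String, u64>, page: &str) -> u64 {
    let n = counts.entry(page.to_string()).or_insert(0);
    *n += 1;
    *n
}
```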
The `index.html` file in this repo contains the above code; if you serve it with
`python3 -m http.server 3000 & cargo run`
and then visit http://localhost:3000, you should see 1 hit, if this is the first time
you're trying it out. Reloading won't increment the count until the hour changes and you visit
again, or until you kill and restart Hitman.
It uses the `referer` header to get the page it was called from, and uses that as the key for
counting the hit.
### Privacy?
The IP from the request is hashed with the date, hour of day, `:page`, and a random 64-bit number
that gets regenerated every time the service is restarted and is never disclosed. This does two
things:
1. ensures that hit counts are limited to one per hour per IP per page;
2. ensures that you can't recover a visitor's IP from a hash by taking the known page and time
and trying all four billion possible IPs to find the one that produces a matching hash.
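That scheme can be sketched with the standard library's hasher; the function name and the use of `DefaultHasher` here are illustrative assumptions, not the project's actual hashing code:

``` rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative sketch of the dedup key described above. Within one hour,
// the same visitor on the same page maps to the same key, so a repeat hit
// isn't counted; without the per-process random nonce, the keys can't be
// enumerated from the public inputs (page and time) alone.
fn hit_key(ip: &str, date: &str, hour: u8, page: &str, nonce: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (ip, date, hour, page, nonce).hash(&mut h);
    h.finish()
}
```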
There is no need to put up a tracking consent form because nothing is being tracked.
### Security?
Well, you need to give it a specific origin that is allowed to connect. Is this enough?
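One common way to enforce a single allowed origin, assuming the axum stack the rest of the code suggests, is tower-http's CORS layer; this is a sketch under that assumption, not the project's actual configuration:

``` rust
use axum::http::HeaderValue;
use tower_http::cors::CorsLayer;

// Build a CORS layer that only permits cross-origin requests from `origin`,
// e.g. cors_for("https://blog.example.com"); attach it to the Router with
// `.layer(...)`.
fn cors_for(origin: &str) -> CorsLayer {
    CorsLayer::new().allow_origin(origin.parse::<HeaderValue>().unwrap())
}
```

Worth noting: CORS only constrains well-behaved browsers. Anything can still `curl` the endpoints directly, so this limits casual cross-site embedding rather than providing real access control.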


@ -67,7 +67,7 @@ async fn main() {
}
//-************************************************************************
// the three route handlers
//-************************************************************************
/// This is the main handler. It counts the hit and returns the latest count.
@ -126,7 +126,10 @@ async fn get_period_hits(
let then = now - chrono::Duration::try_days(7).unwrap();
then.to_rfc3339()
}
_ => {
let then = now - chrono::Duration::try_days(365_240).unwrap(); // 1000 years
then.to_rfc3339()
}
};
let hits = sqlx::query_scalar!(
@ -141,6 +144,8 @@ async fn get_period_hits(
format!("{hits}")
}
// it's easier to split this into something that handles the request parameters
// and something that does the query
async fn get_all_hits(State(db): State<SqlitePool>, Path(slug): Path<String>) -> String {
let hits = all_hits_helper(&db, &slug).await;
format!("{hits}")