Merge branch 'serialization-post'

.gitignore (vendored)
@@ -1,2 +1 @@
-*.jpg
 public/
@@ -1,4 +1,5 @@
-name = "From the Office of the Chief Sundries Officer and Head of R&D"
+name = "Office of the Chief Sundries Officer and Head of R&D"
+title = "From the Desk of the Chief Sundries Officer and Head of R&D, NebCorp Heavy Industries and Sundries"
 
 # The URL the site will be built for
 base_url = "https://proclamations.nebcorp-hias.com"
@@ -16,6 +17,8 @@ taxonomies = [
 ]
 
 theme = "apollo"
+mathjax = true
+mathjax_dollar_inline_enable = true
 
 [markdown]
 # Whether to do syntax highlighting
@@ -1,8 +1,9 @@
 +++
-title = "From the Desk of the Head of R&D and Chief Sundries Officer"
+title = "Latest Proclamations"
 sort_by = "date"
 generate_feed = true
 toc = true
+template = "home.html"
 [extra]
 toc = true
 +++
content/rnd/a_serialized_mystery/index.md (new file, 535 lines)
@@ -0,0 +1,535 @@
+++
title = "A One-Part Serialized Mystery"
slug = "one-part-serialized-mystery"
date = "2023-06-29"
updated = "2023-06-29"
[taxonomies]
tags = ["software", "rnd", "proclamation", "upscm", "rust"]
+++

# *Mise en Scene*

I recently spent a couple days moving from [one type of universally unique identifier](https://commons.apache.org/sandbox/commons-id/uuid.html) to a [different one](https://github.com/ulid/spec), for an in-progress [database-backed web-app](https://gitlab.com/nebkor/ww). The [initial work](https://gitlab.com/nebkor/ww/-/commit/be96100237da56313a583be6da3dc27a4371e29d#f69082f7433f159d627269b207abdaf2ad52b24c) didn't take very long, but debugging the [serialization and deserialization](https://en.wikipedia.org/wiki/Serialization) of the new IDs took another day and a half, and in the end, the alleged mystery of why it wasn't working was a red herring due to my own stupidity. So come with me on an exciting voyage of discovery, and [once again, learn from my folly](@/sundries/a-thoroughly-digital-artifact/index.md)!
# Keys, primarily

Most large distributed programs that people interact with daily via HTTP are, in essence, a fancy facade for some kind of database. Facebook? That's a database. Gmail? That's a database.

![that's a database][thats_a_database]
<div class="caption">wikipedia? that's a database.</div>

In most databases, each entry ("row") has a field that acts as a [primary key](https://en.wikipedia.org/wiki/Primary_key), used to uniquely identify that row inside the table it's in. Since databases typically contain multiple tables, and primary keys have to be unique only within their own table, you could just use a simple integer that's automatically incremented every time you add a new record; in many databases, if you create a table without specifying a primary key, they will [automatically and implicitly use a mechanism](https://www.sqlite.org/lang_createtable.html#rowid) like that. You may also recognize the idea of "serial numbers", which is what these sorts of IDs are.

This is often totally fine! If you only ever have one copy of the database, and never have to worry about inserting rows from a different instance of the database, then you can just use those simple values and move on your merry way.

However, if you ever think you might want to have multiple instances of your database running, and want to make sure they're eventually consistent with each other, then you might want to use a fancier identifier for your primary keys, to avoid collisions between them.
## UUIDs

A popular type for fancy keys is the [v4 UUID](https://datatracker.ietf.org/doc/html/rfc4122#page-14). These are 128-bit random numbers[^uuidv4_random], and when turned into a string, usually look something like `1c20104f-e04f-409e-9ad3-94455e5f4fea`; this is called the "hyphenated" form, for fairly obvious reasons. Although sometimes they're stored in a DB in that form directly, that's using 36 bytes to store 16 bytes' worth of data, which is more than twice as many bytes as necessary. And if you're a programmer, this sort of conspicuous waste is unconscionable.
You can cut that to 32 bytes by just dropping the dashes, but then that's still twice as many bytes as the actual data requires. If you never have to actually display the ID inside the database, then the simplest thing to do is just store it as a blob of 16 bytes[^blob-of-bytes]. Finally, optimal representation and efficiency!
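To make those byte counts concrete, here's a minimal sketch using the `uuid` crate (illustrative only; this isn't code from the app):

``` rust
// The same ID in its three representations, from largest to smallest.
use uuid::Uuid;

fn main() {
    let id = Uuid::new_v4(); // requires the "v4" feature of the uuid crate
    let hyphenated = id.hyphenated().to_string(); // 36 bytes as a string
    let simple = id.simple().to_string(); // 32 bytes, dashes dropped
    let raw: &[u8; 16] = id.as_bytes(); // the 16 bytes of actual data

    assert_eq!(hyphenated.len(), 36);
    assert_eq!(simple.len(), 32);
    assert_eq!(raw.len(), 16);
}
```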
## Indexes?

And at first, that's what I did. The [external library](https://docs.rs/sqlx/latest/sqlx/) I'm using to interface with my database automatically writes UUIDs as a sequence of sixteen bytes, if you specified the type in the database[^sqlite-dataclasses] as "[blob](https://www.sqlite.org/datatype3.html)", which [I did](https://gitlab.com/nebkor/ww/-/commit/65a32f1f20df6c572580d796e1044bce807fd3b6#f1043d50a0244c34e4d056fe96659145d03b549b_0_5).

But then I saw a [blog post](https://shopify.engineering/building-resilient-payment-systems) where the following tidbit was mentioned:

> We prefer using an Universally Unique Lexicographically Sortable Identifier (ULID) for these idempotency keys instead of a random version 4 UUID. ULIDs contain a 48-bit timestamp followed by 80 bits of random data. The timestamp allows ULIDs to be sorted, which works much better with the b-tree data structure databases use for indexing. In one high-throughput system at Shopify we’ve seen a 50 percent decrease in INSERT statement duration by switching from UUIDv4 to ULID for idempotency keys.
Whoa, that sounds great! But [this youtube video](https://www.youtube.com/watch?v=f53-Iw_5ucA&t=590s) tempered my expectations a bit, by describing the implementation-dependent reasons for that dramatic improvement. Still, switching from UUIDs to ULIDs couldn't *hurt*[^no-stinkin-benches], right? Plus, by encoding the time of creation (at least to the nearest millisecond), I could remove a "created at" field from every table that used them as primary keys. Which, in my case, would be all of them, and I'm worried less about the speed of inserts than I am about keeping total on-disk size down anyway.
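That "remove a column" trick works because the creation time can be recovered from the ID itself; a minimal sketch, assuming the `ulid` crate and its `timestamp_ms()` accessor:

``` rust
use ulid::Ulid;

fn main() {
    let id = Ulid::new(); // encodes "now" in its 48 most-significant bits
    let created_at_ms: u64 = id.timestamp_ms(); // milliseconds since the Unix epoch
    println!("{} was created at {}", id, created_at_ms);
}
```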
I was actually already familiar with the idea of using time-based sortable IDs, from [KSUIDs](https://github.com/segmentio/ksuid). It's an attractive concept to me, and I'd considered using them from the get-go, but discarded that for two main reasons:

- they're **FOUR WHOLE BYTES!!!** larger than UUIDs
- I'd have to manually implement serialization/deserialization, since SQLx doesn't have native support for them

In reality, neither of those is a real show-stopper; 20 vs. 16 bytes is probably not that significant, and I'd have to do the manual serialization stuff for anything besides a less-than-8-bytes number or a normal UUID. Still, four bytes is four bytes, and all other things being equal, I'd rather go for the trimmer, 64-bit-aligned value.

Finally, I'd recently finished adding some ability to actually interact with data in a meaningful way, and to add new records to the database, which meant that it was now or never for standardizing on a type for the primary keys. I was ready to do this thing.
# Serial problems

"Deserialization" is the act of converting a static, non-native representation of some kind of datatype into a dynamic, native computer programming object, so that you can do the right computer programming stuff to it. It can be as simple as when a program reads in a string of digit characters and parses it into a real number, but of course the ceiling on complexity is limitless.

In my case, it was about getting those sixteen bytes out of the database and turning them into ULIDs. Technically, I could have let Rust [handle that for me](https://serde.rs/derive.html) by automatically deriving that functionality. There were a couple snags with that course, though:

- the default serialized representation of a ULID in the library I was using to provide them [is as 26-character strings](https://docs.rs/ulid/latest/ulid/serde/index.html), and I wanted to use only 16 bytes in the database
- you could tell it to serialize as a [128-bit number](https://docs.rs/ulid/latest/ulid/serde/ulid_as_u128/index.html), but that merely kicked the problem one step down the road, since SQLite can only handle up to 64-bit numbers, as previously discussed, so I'd still have to manually do something for them

This meant going all-in on fully custom serialization and deserialization, something I'd never done before, but how hard could it be? (spoiler: actually not that hard!)
## Great coders steal

Something I appreciate about the [Rust programming language](https://www.rust-lang.org/) is that because of the way the compiler works[^rust-generics], the full source code almost always has to be available to you, the end-user coder. The culture around it is also very biased toward open source, and so all the extremely useful libraries are just sitting there, ready to be studied and copied. So the first thing I did was take a look at how [SQLx handled UUIDs](https://github.com/launchbadge/sqlx/blob/main/sqlx-sqlite/src/types/uuid.rs):

``` rust
impl Type<Sqlite> for Uuid {
    fn type_info() -> SqliteTypeInfo {
        SqliteTypeInfo(DataType::Blob)
    }

    fn compatible(ty: &SqliteTypeInfo) -> bool {
        matches!(ty.0, DataType::Blob | DataType::Text)
    }
}

impl<'q> Encode<'q, Sqlite> for Uuid {
    fn encode_by_ref(&self, args: &mut Vec<SqliteArgumentValue<'q>>) -> IsNull {
        args.push(SqliteArgumentValue::Blob(Cow::Owned(
            self.as_bytes().to_vec(),
        )));

        IsNull::No
    }
}

impl Decode<'_, Sqlite> for Uuid {
    fn decode(value: SqliteValueRef<'_>) -> Result<Self, BoxDynError> {
        // construct a Uuid from the returned bytes
        Uuid::from_slice(value.blob()).map_err(Into::into)
    }
}
```

There's not a ton going on there, as you can see. To "encode" it just gets the bytes out of the UUID, and to "decode" it just gets the bytes out of the database. I couldn't use that exactly as done by the SQLx authors, as they were using datatypes that were private to their crate, but it was close enough; here's mine:
``` rust
impl sqlx::Type<sqlx::Sqlite> for DbId {
    fn type_info() -> <sqlx::Sqlite as sqlx::Database>::TypeInfo {
        <&[u8] as sqlx::Type<sqlx::Sqlite>>::type_info()
    }
}

impl<'q> Encode<'q, Sqlite> for DbId {
    fn encode_by_ref(&self, args: &mut Vec<SqliteArgumentValue<'q>>) -> IsNull {
        args.push(SqliteArgumentValue::Blob(Cow::Owned(self.bytes().to_vec())));
        IsNull::No
    }
}

impl Decode<'_, Sqlite> for DbId {
    fn decode(value: SqliteValueRef<'_>) -> Result<Self, sqlx::error::BoxDynError> {
        let bytes = <&[u8] as Decode<Sqlite>>::decode(value)?;
        let bytes: [u8; 16] = bytes.try_into().unwrap_or_default();
        Ok(u128::from_ne_bytes(bytes).into())
    }
}
```
(In order to implement the required methods from SQLx, I had to wrap the ULID in a new, custom type, which I called `DbId`, to comply with the [orphan rules](https://github.com/Ixrec/rust-orphan-rules).)
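The wrapper itself is about as small as a type can be; roughly this shape (my reconstruction for illustration; the real definition lives in the linked `db_id.rs`):

``` rust
// A newtype around Ulid; since DbId is defined in this crate, the orphan
// rules allow implementing SQLx's and Serde's traits for it here.
struct DbId(ulid::Ulid);
```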
That's only half the story, though. If all I had to worry about was getting data in and out of the database, that would be fine, but because I'm building a web app, I need to be able to include my new ID type in messages sent over a network or as part of a web page, and for that, it needed to implement some functionality from a different library, called [Serde](https://serde.rs/). My original implementation for *deserializing* looked like this:
``` rust
struct DbIdVisitor;

impl<'de> Visitor<'de> for DbIdVisitor {
    type Value = DbId;

    // make a DbId from a slice of bytes
    fn visit_bytes<E>(self, v: &[u8]) -> Result<Self::Value, E>
    where
        E: serde::de::Error,
    {
        ...
    }

    // make a DbId from a Vec of bytes
    fn visit_byte_buf<E>(self, v: Vec<u8>) -> Result<Self::Value, E>
    where
        E: serde::de::Error,
    {
        ...
    }

    // you get the picture
    fn visit_string() ...
    fn visit_u128() ...
    fn visit_i128() ...
}
```
In my mind, the only important pieces were the `visit_bytes()` and `visit_byte_buf()` methods, which worked basically the same as the `decode()` function for SQLx. I mean, as far as I could tell, the only time something would be encountering a serialized `DbId` would be in the form of raw bytes from the database; no one else would be trying to serialize one as something else that I didn't anticipate, right?

RIGHT???

(wrong)

## A puzzling failure

As soon as my code compiled, I ran my tests. Everything passed... except for one, which tested logging in.

This was very strange. All the other tests were passing, and basically every operation requires getting one of these IDs into or out of the database. But at this point, it was late, and I set it down until the next day.
# When in doubt, change many things at once

The next day I sat back down to get back to work, and in the course of examining what was going on, I realized that I'd missed something crucial: these things were supposed to be *sortable*. But the way I was inserting them meant that they weren't, because of endianness.

## More like shmexicographic, amirite

"ULID" stands for "Universally Unique Lexicographically Sortable Identifier"[^uulsid]. "[Lexicographic order](https://en.wikipedia.org/wiki/Lexicographic_order)" basically means, "like alphabetical, but for anything with a defined total order". Numbers have a defined total order; bigger numbers always go after smaller.

But sometimes numbers get sorted out of order, if they're not treated as numbers. Say you had a directory with twelve files in it, called "1.txt" up through "12.txt". If you were to ask to see them listed out in lexicographic order, it would go like:
``` text
$ ls
10.txt
11.txt
12.txt
1.txt
2.txt
3.txt
4.txt
5.txt
6.txt
7.txt
8.txt
9.txt
```
This is because '10' is "less than" '2' (and '0' is "less than" '.', which is why "10.txt" is before "1.txt"). The solution, as all data-entering people know, is to pad the number with leading '0's:
``` text
$ ls
01.txt
02.txt
03.txt
04.txt
05.txt
06.txt
07.txt
08.txt
09.txt
10.txt
11.txt
12.txt
```
Now the names are lexicographically sorted in the right numerical order[^confusing-yes].

So, now that we're all expert lexicographicographers, we understand that our IDs are just supposed to naturally sort themselves in the correct order, based on when they were created; IDs created later should sort after IDs created earlier.
The implementation for my ULIDs only guaranteed this property for the string form of them, but I was not storing them in string form. Fundamentally, the ULID was a simple [128-bit primitive integer](https://doc.rust-lang.org/std/primitive.u128.html), capable of holding values between 0 and 340,282,366,920,938,463,463,374,607,431,768,211,455.
But there's a problem: I was storing the ID in the database as a sequence of 16 bytes, and I was asking for those bytes in "native endian", which in my case meant "little endian". If you're not familiar with endianness, there are two varieties: big, and little. "Big" makes the most sense for a lot of people; if you see a number like "512", it's big-endian; the end is the part that's left-most, and "big" means that it is the most-significant digit. This is the same as what westerners think of as "normal" numbers. In the number "512", the most significant digit is `5`, which corresponds to `500`, which is added to the next-most-significant digit, `1`, corresponding to `10`, which is added to the last and least-significant digit, `2`, which is just `2`, giving us the full number `512`.
If we put the least-significant digit first, we'd write the number `512` as "215"; the order when written out would be reversed. This means that a lexicographic sort of `512` and `521` would put "125" (that's `521`, written out backwards) before "215" (that's `512`), which is the wrong order.
Little-endianness is like that. If a multibyte numeric value is on a little-endian system, the least-significant bytes will come first, and a lexicographic sorting of those bytes would be non-numeric.
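Here's the failure mode in miniature (a sketch; the constants are made up, but the top 48 bits stand in for a ULID timestamp):

``` rust
// Two fake "ULIDs" one millisecond apart: the later one sorts after the
// earlier one only if the bytes are written most-significant first.
fn main() {
    let t: u128 = 0x0189_03CD_DCAA; // a 48-bit millisecond timestamp
    let earlier = (t << 80) | 0xFF; // older ID with some random low bits set
    let later = (t + 1) << 80; // newer ID, created 1ms later

    assert!(later.to_be_bytes() > earlier.to_be_bytes()); // sorts by creation time
    assert!(later.to_le_bytes() < earlier.to_le_bytes()); // sorts "backwards"
}
```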
The solution, though, is simple: just write them out in big-endian order! This was literally a one-line change in the code, to switch from `to_ne_bytes()` ("ne" for "native endian") to `to_be_bytes()`. I confirmed that the bytes written into the database were in the correct lexicographic order:
``` sql
sqlite> select hex(id), username from users order by id asc;
018903CDDCAAB0C6872A4509F396D388|first_user
018903D0E591525EA42202FF461AA5FA|second_user
```
Note that the first six characters are the same for these two users, created some time apart[^ulid-timestamps].

Boom. "Sorted".
## The actual problem

Except that the logins were still broken; it wasn't just the test. What was even stranger is that with advanced debugging techniques[^advanced-debugging], I confirmed that the login *was* working. By which I mean, when the user submitted a login request, the function that handled the request was:

- correctly confirming password match
- retrieving the user from the database
The second thing was required for the first. It was even creating a session in the session table:

``` sql
sqlite> select * from async_sessions;
..|..|{"id":"ZY...","expiry":"...","data":{"_user_id":"[1,137,3,205,220,170,176,198,135,42,69,9,243,150,211,136]","_auth_id":"\"oM..."}}
```
I noticed that the ID was present in the session entry, but as what looked like an array of decimal values. The less not-astute among you may have noticed that the session table seemed to be using JSON to store information. This wasn't my code, but it was easy enough to find the [culprit](https://github.com/http-rs/async-session/blob/d28cef30c7da38f52639b3d60fc8cf4489c92830/src/session.rs#L214):
``` rust
pub fn insert(&mut self, key: &str, value: impl Serialize) -> Result<(), serde_json::Error> {
    self.insert_raw(key, serde_json::to_string(&value)?);
    Ok(())
}
```
This was in the [external library](https://docs.rs/async-session/latest/async_session/) I was using to provide cookie-based sessions for my web app, and was transitively invoked when I called the `login()` method in my own code. Someone else was serializing my IDs, in a way I hadn't anticipated!

The way that Serde decides what code to call is based on its [data model](https://serde.rs/data-model.html). And wouldn't you know it, the following words are right there, hiding in plain sight, as they had been all along:
> When deserializing a data structure from some format, the Deserialize implementation for the data structure is responsible for mapping the data structure into the Serde data model by passing to the Deserializer a Visitor implementation that can receive the various types of the data model...
>
> [...]
>
> * seq
>   - A variably sized heterogeneous sequence of values, for example Vec<T> or HashSet<T>. ...
>
> [...]
>
> The flexibility around mapping into the Serde data model is profound and powerful. When implementing Serialize and Deserialize, be aware of the broader context of your type that may make the most instinctive mapping not the best choice.
Well, when you put it that way, I can't help but understand: I needed to implement a `visit_seq()` method in my deserialization code.

![fine, fine, I see the light][see_the_light]
<div class = "caption">fine, fine, i see the light</div>
You can see that [here](https://gitlab.com/nebkor/ww/-/blob/656e6dceedf0d86e2805e000c9821e931958a920/src/db_id.rs#L194-216) if you'd like, but I'll actually come back to it in a second. The important part was that my logins were working again; time to party!
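If you're curious, the general shape of such a method is below (a sketch of the standard `SeqAccess` pattern, not the linked code; it assumes `DbId: From<u128>`, and the byte-order choice is exactly what the rest of this post is about):

``` rust
// Inside the Visitor impl: handle Serde's "seq" shape, which is how
// serde_json hands back an ID it stored as a JSON array of sixteen numbers.
fn visit_seq<A>(self, mut seq: A) -> Result<Self::Value, A::Error>
where
    A: serde::de::SeqAccess<'de>,
{
    let mut bytes = [0u8; 16];
    for byte in bytes.iter_mut() {
        // take the next element, erroring out if the sequence is too short
        *byte = seq
            .next_element()?
            .ok_or_else(|| serde::de::Error::custom("expected 16 bytes"))?;
    }
    Ok(u128::from_be_bytes(bytes).into())
}
```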
## Wait, why *isn't* it broken?

I'd just spent the day banging my head against this problem, and so when everything worked again, I committed and pushed the change and signed off. But something was still bothering me, and the next day, I dove back into it.
All my serialization code was calling a method called [`bytes()`](https://gitlab.com/nebkor/ww/-/blob/656e6dceedf0d86e2805e000c9821e931958a920/src/db_id.rs#L18), which simply called another method that would return an array of 16 bytes, in big-endian order, so it could go into the database and be sortable, as discussed.
But all[^actually_not_all] my *deserialization* code was constructing the IDs as [though the bytes were *little*-endian](https://gitlab.com/nebkor/ww/-/blob/656e6dceedf0d86e2805e000c9821e931958a920/src/db_id.rs#L212). Which led me to ask:

what the fuck?
Like, everything was *working*. Why did I need to construct from a different byte order? I felt like I was losing my mind, so I reached out to the [Recurse Center](https://www.recurse.com) community and presented my case.

Basically, I showed that the bytes were written correctly, resident in the DB in big-endian form, but then were "backwards" coming out and "had to be" cast using little-endian constructors (`from_ne_bytes()`, where "native endian" means little-endian on my machine).
What had actually happened is that as long as there was agreement about what order to use for reconstructing the ID from the bytes, it didn't matter if it was big or little-endian; it just had to be the same on both the [SQLx](https://gitlab.com/nebkor/ww/-/commit/84d70336d39293294fd47b4cf115c70091552c11#ce34dd57be10530addc52a3273548f2b8d3b8a9b_106_105) side and on the [Serde](https://gitlab.com/nebkor/ww/-/commit/84d70336d39293294fd47b4cf115c70091552c11#ce34dd57be10530addc52a3273548f2b8d3b8a9b_210_209) side. This is also irrespective of the order they were written out in, but again, the two sides must agree on the convention used. Inside the Serde method, I had added some debug printing of the bytes it was getting, and they were in little-endian order. What I had not realized is that that was because they were first passing through the SQLx method, which reversed them.
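In other words, byte order round-trips as long as both sides pick the same convention; mixing conventions is the only way to lose. A two-assert sketch:

``` rust
fn main() {
    let n: u128 = 0x0102_0304_0506_0708_090a_0b0c_0d0e_0f10;
    // Either convention round-trips when used on both sides...
    assert_eq!(u128::from_be_bytes(n.to_be_bytes()), n);
    assert_eq!(u128::from_le_bytes(n.to_le_bytes()), n);
    // ...but mixing them hands you back byte-reversed garbage.
    assert_ne!(u128::from_le_bytes(n.to_be_bytes()), n);
}
```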
Mmmmm, delicious, delicious red herring.
Two people were especially helpful, Julia Evans and Nicole Tietz-Sokolskaya; Julia grabbed a copy of my database file and poked it with Python, and could not replicate the behavior I was seeing, and Nicole did the same but with a little Rust program she wrote. Huge thanks to both of them (but not just them) for the extended [rubber ducking](https://en.wikipedia.org/wiki/Rubber_duck_debugging)! And apologies for the initial gas-lighting; Julia was quite patient and diplomatic when pushing back against "the bytes are coming out of the db backwards".
# Lessons learned

Welp, here we are, the end of the line; I hope this has been informative, or barring that, at least entertaining. Or the other way around, I'm not that fussy!

Obviously, the biggest mistake was to futz with being clever about endianness before understanding why the login code was now failing. Had I gotten it working correctly first, I would have been able to figure out the requirement for agreement on convention between the two different serialization systems much sooner, and I would not have wasted my own and others' time on a misunderstanding.
On the other hand, it's hard to see these things on the first try, especially when you're on your own, taking your first fumbling steps in a new domain or ecosystem; for me, that was getting into the nitty-gritty with Serde, and for that matter, dealing directly with serialization-specific issues. Collaboration is a great technique for navigating these situations, and I definitely need to focus a bit more on enabling that[^solo-yolo-dev].

In the course of debugging this issue, I tried to get more insight via [testing](https://gitlab.com/nebkor/ww/-/commit/656e6dceedf0d86e2805e000c9821e931958a920#ce34dd57be10530addc52a3273548f2b8d3b8a9b_143_251), and though that helped a little, it was not nearly enough; the problem was that I misunderstood how something worked, not that I had mistakenly implemented something I was comfortable with. Tests aren't a substitute for understanding!
And of course, I'm now much more confident and comfortable with Serde; reading the Serde code for other things, like [UUIDs](https://github.com/uuid-rs/uuid/blob/main/src/external/serde_support.rs), is no longer an exercise in eye-glaze-control. Maybe this has helped you with that too?

----
[^uuidv4_random]: Technically, most v4 UUIDs have only 122 random bits, as six out of 128 are reserved for version information.

[^blob-of-bytes]: Some databases have direct support for 128-bit primitive values (numbers). The database I'm using, SQLite, only supports up to 64-bit primitive values, but it does support arbitrary-length sequences of bytes called "blobs".
[^sqlite-dataclasses]: I'm using [SQLite](https://www.sqlite.org/index.html) for reasons that I plan to dive into in a different post, but "blob" is specific to it. In general, you'll probably want to take advantage of implementation-specific features of whatever database you're using, which means that your table definitions won't be fully portable to a different database. This is fine and good, actually!

[^no-stinkin-benches]: You may wonder: have I benchmarked this system with UUIDs vs. ULIDs? Ha ha, you must have never met a programmer before! No, of course not. But, that's coming in a follow-up.
[^rust-generics]: If the code you're using has [generics](https://doc.rust-lang.org/book/ch10-01-syntax.html) in it, then the compiler needs to generate specialized versions of that generic code based on how you use it; this is called "[monomorphization](https://doc.rust-lang.org/book/ch10-01-syntax.html#performance-of-code-using-generics)", and it requires the original generic source to work. That's also true in C++, which is why most templated code is [header-only](https://isocpp.org/wiki/faq/templates#templates-defn-vs-decl), but Rust doesn't have header files.

[^uulsid]: I guess the extra 'U' and 'S' are invisible.
[^confusing-yes]: Is this confusing? Yes, 100%, it is not just you. Don't get discouraged.

[^ulid-timestamps]: The 6 most-significant bytes make up the timestamp in a ULID, which in the hex dump form pasted there would be the first twelve characters, since each byte is two hex digits.

[^advanced-debugging]: "adding `dbg!()` statements in the code"
[^actually_not_all]: Upon further review, I discovered that the only methods that were constructing with little-endian order were the SQLx `decode()` method and the Serde `visit_seq()` method, which were also the only ones that were being called at all. The [`visit_bytes()`](https://gitlab.com/nebkor/ww/-/blob/656e6dceedf0d86e2805e000c9821e931958a920/src/db_id.rs#L152) and `visit_byte_buf()` methods, which I had thought were so important, were correctly treating the bytes as big-endian, but were simply never actually used. I fixed that [in the next commit](https://gitlab.com/nebkor/ww/-/commit/84d70336d39293294fd47b4cf115c70091552c11#ce34dd57be10530addc52a3273548f2b8d3b8a9b).

[^solo-yolo-dev]: I've described my current practices as "solo-yolo", which has its plusses and minuses, as you may imagine.
[thats_a_database]: ./thats_a_database.png "simpsons that's-a-paddling guy"

[see_the_light]: ./seen_the_light.png "jake blues seeing the light"
content/rnd/a_serialized_mystery/seen_the_light.png (new binary file, 330 KiB)
content/rnd/a_serialized_mystery/thats_a_database.png (new binary file, 253 KiB)
content/sundries/a-thoroughly-digital-artifact/final_printed.jpg (new binary file, 888 KiB)
content/sundries/a-thoroughly-digital-artifact/final_shasta.png (new binary file, 301 KiB)
content/sundries/a-thoroughly-digital-artifact/geotiff-files.png (new binary file, 30 KiB)

content/sundries/a-thoroughly-digital-artifact/index.md (new file, 900 lines)
@@ -0,0 +1,900 @@
+++
title = "A Thoroughly Digital Artifact"
slug = "a-thoroughly-digital-artifact"
date = "2023-01-19"
updated = "2023-01-21"
[taxonomies]
tags = ["3dprinting", "CAD", "GIS", "CNC", "art", "sundry", "proclamation", "research"]
+++

![A plywood slab carved with CNC into a topographic representation of California][main_image]
# A birthday wish

Last summer, I wanted to get my wife something nice for her birthday. For many years, she had expressed an occasional and casual desire for a topographic carving of the state of California, where we live, and I thought it might be something I could figure out how to get her. In the end, after many dozens of hours of work, five weeks, and several hundred dollars paid to a professional CNC machine shop, I had the artifact shown in the picture above. This is the story of its creation, starting from knowing almost nothing about GIS, cartography, or CNC machining.
# First steps

Before you ask, I did not do a ton of research before embarking on this. As I write this, about six months later, it only now occurred to me to do a basic search for an actual physical thing I could buy, and luckily it seems that CNC-carved wooden relief maps of the whole state are not trivially easy to come by, so, *phew!*

No, my first step was to see if there were any shops in the area that could carve something out of nice plywood, about a week before the intended recipient's birthday. I found one that was less than ten minutes away, and filled out their web contact form. They had a field for material, and I said, "some nice plywood between 0.75 and 1.0 inches thick or similar" (I didn't know exactly what was available and wanted to give broad acceptable parameters), and under "project description", I wrote,

> A relief map of California, carved from wood. Height exaggerated enough to visibly discern the Santa Monica mountains. I can provide an STL file if needed.
For some [incorrect] reason that I only later examined[^introspection], I just sort of assumed that the shop would have a library of shapes available for instantiating into whatever material medium you might need. But just in case, I included that hedge about being able to provide an STL file. Needless to say, that was a bluff.

![the programmer's creed: we do these things not because they are easy, but because we thought they were going to be easy -- from twitter user @unoservix, 2016-08-05][programmers_creed]
*<center><sup><sub>me, every. single. time.</sub></sup></center>*
Also needless to say, my bluff was immediately called, and I had the following exchange with the shop:

> *CNC Shop*: STL can work but I can’t manipulate it, which could save some money. If possible can it be exported to an .igs or .iges or .stp format?
>
> *Me*: Yeah, STP should be no problem. Can you give a rough estimate of the cost for 1x2-foot relief carving?
>
> *Shop*: Without seeing the drawings, I can’t give even a close price but in the past they range from a few hundred dollars to several thousand dollars.
>
> *Me*: That's totally fair! I'll get you some files in a few days.

"STP should be no problem ... I'll get you some files in a few days," was an even harder lean into the bluff; my next communication with the shop was nearly four weeks later. But that's getting ahead of things.
# Meshes and solid bodies

First off, let's talk about file formats and how to represent shapes with a computer.[^math-computers] I first said I could provide an *STL file*. [STL](https://en.wikipedia.org/wiki/STL_(file_format)) is a pretty bare-bones format that describes the outside surface of a shape as a mesh of many, many triangles, each of which is described by three 3D points, where each point (but not necessarily each edge) of the triangle lies on the surface of the shape of the thing you're modeling. This format is popular with 3D printers, which is how I became familiar with it.

STL is simple to implement and easy for a computer to read, but if you have a model in that format that you need to manipulate, like you want to merge it with another shape, you won't have a good time. In order to actually do things like that, it needs to be converted into a CAD program's native representation of a "solid body", which is pretty much what it sounds like: a shape made of a finite volume of "stuff", and NOT just an infinitesimally thin shell enclosing an empty volume, which is what a mesh is.

In order for the CAD program to convert a mesh into a solid body, the mesh must be *manifold*, meaning, no missing faces (triangles), and with a clearly-defined interior and exterior (all triangles are facing in one direction relative to their interior). When there are no missing faces, it's called "water tight". You can still have "holes" in a mesh, like if you have a model of a donut[^manifold_holes], but the surface of the donut can't have any missing faces. A valid STL file's meshes are manifold.
The CNC shop had requested a model in a format called [ST**P**](https://www.fastradius.com/resources/everything-you-need-to-know-about-step-files/). `.stp` is the extension for a "STEP" file; STEP is supposed to be short for "standard for the exchange of product data", so someone was playing pretty fast and loose with their initialisms, but I digress. The main thing about STEP files is that CAD programs can really easily convert them into their native internal solid body representation, which allows easy manipulation. <a name="prophecy"></a> Another thing about them is that a CAD program can usually turn a manifold mesh into an STP file, unless the mesh is too complicated and your computer doesn't have enough RAM (*note: foreshadowing*[^chekhovs-ram]).

![an overly-complicated mesh of a cube][meshy-cube]
*<center><sup><sub>this cube's mesh has too many vertices and edges, I hope my computer has enough RAM to work with it</sub></sup></center>*
But at that moment, I had nothing at all. Time to get some data and see if I could turn it into a model.

# Public data

My first impulse was to search [USGS](https://usgs.gov)'s website for [digital elevation map](https://en.wikipedia.org/wiki/Digital_elevation_model) data, but I wound up not finding anything appropriate. Searching now with the wisdom of experience and hindsight, I found this, which would have been perfect:

<https://apps.nationalmap.gov/downloader/>

Did I just accidentally miss it then? Did I find it and not recognize its utility because I didn't know what I was doing *at all*? The world may never know, but at least now you can benefit from my many, many missteps.
## From space?

Anyway, having not found anything I could really use from the USGS through all fault of my own, I found [this site](https://portal.opentopography.org/raster?opentopoID=OTSRTM.082015.4326.1), from OpenTopography, an organization run by the UCSD Supercomputer Center, under a grant from the National Science Foundation. So, still hooray for public data!

That particular page is for a particular dataset; in this case, "[SRTM GL1](http://www2.jpl.nasa.gov/srtm/) Global 30m". "SRTM" stands for "[Shuttle Radar Topography Mission](https://en.wikipedia.org/wiki/Shuttle_Radar_Topography_Mission)", which was a Space Shuttle mission in February, 2000, where it did a [fancy radar scan](https://en.wikipedia.org/wiki/Interferometric_synthetic-aperture_radar) of most of the land on Earth. Though, it's hard to verify that the data was not synthesized with other datasets of more recent, non-space origin, especially in places like California. But probably space was involved in some way.
## In Australia, it's pronounced "g'dal"

Anyway, I'd found an open source of public data. This dataset's [horizontal resolution is 1 arc second](https://gisgeography.com/srtm-shuttle-radar-topography-mission/) (which is why it's "GL**1**"), or roughly 30x30 meters, and the height data is accurate to within 16 meters. Not too shabby!

They provided the data in the form of [GeoTIFF](https://en.wikipedia.org/wiki/GeoTIFF)s, which are basically an image where each pixel represents one data point (so, a 30x30 square meter plot) centered at a particular location on the Earth's surface. It's a monochrome image, where absolute height is mapped to absolute brightness of each pixel, and each pixel represents an exact location in the world.

The only problem was that you could only download data covering up to 450,000 square kilometers at a time, so I had to download a bunch of separate files and then mosaic them together. Luckily, there's a whole suite of open source tools called [GDAL](https://gdal.org/faq.html#what-does-gdal-stand-for). Among that suite is a tool called `gdal_merge.py` (yes, the `.py` is part of the name of the tool that gets installed to your system when you install the GDAL tools), which does exactly what I wanted:
> `gdal_merge.py -o ca_topo.tif norcal_topo.tif centcal_topo.tif socal_topo.tif so_cent_cal_topo.tif norcal_topo_redux.tif last_bit.tif east_ca.tif`

This produced a file called `ca_topo.tif`. It was very large, in every sense:

![listing of tif files with sizes][geotiff-files]
*<center><sup><sub>last_little_piece_i_swear_final_final2.tif</sub></sup></center>*

Using [another tool](https://gdal.org/programs/gdalinfo.html) called `gdalinfo`, we can examine the metadata of the mosaic we just created:
``` text
$ gdalinfo -mm ca_topo.tif
Driver: GTiff/GeoTIFF
Files: ca_topo.tif
Size is 40757, 35418
Coordinate System is:
GEOGCRS["WGS 84",
    DATUM["World Geodetic System 1984",
        ELLIPSOID["WGS 84",6378137,298.257223563,
            LENGTHUNIT["metre",1]]],
    PRIMEM["Greenwich",0,
        ANGLEUNIT["degree",0.0174532925199433]],
    CS[ellipsoidal,2],
        AXIS["geodetic latitude (Lat)",north,
            ORDER[1],
            ANGLEUNIT["degree",0.0174532925199433]],
        AXIS["geodetic longitude (Lon)",east,
            ORDER[2],
            ANGLEUNIT["degree",0.0174532925199433]],
    ID["EPSG",4326]]
Data axis to CRS axis mapping: 2,1
Origin = (-125.109583333326071,42.114305555553187)
Pixel Size = (0.000277777777778,-0.000277777777778)
Metadata:
  AREA_OR_POINT=Area
Image Structure Metadata:
  INTERLEAVE=BAND
Corner Coordinates:
Upper Left  (-125.1095833,  42.1143056) (125d 6'34.50"W, 42d 6'51.50"N)
Lower Left  (-125.1095833,  32.2759722) (125d 6'34.50"W, 32d16'33.50"N)
Upper Right (-113.7881944,  42.1143056) (113d47'17.50"W, 42d 6'51.50"N)
Lower Right (-113.7881944,  32.2759722) (113d47'17.50"W, 32d16'33.50"N)
Center      (-119.4488889,  37.1951389) (119d26'56.00"W, 37d11'42.50"N)
Band 1 Block=40757x1 Type=Int16, ColorInterp=Gray
  Computed Min/Max=-130.000,4412.000
```
If I may draw your attention to a couple things there: the image is 40,757 pixels wide and 35,418 pixels tall. The "pixel size" is 0.000277777777778 by 0.000277777777778, and the unit, given by the "angleunit", is degrees (the 0.0174532925199433 in the output is just one degree expressed in radians); 1 arc second is 1/3600th of a degree, which is 0.0002777..., exactly our pixel size. These are degrees of arc along the surface of the Earth[^wgs-ellipsoid], at a distance measured from the center of the planet. As previously mentioned, that translates into a size of roughly 30 meters. So if you were ever curious about how many 100-ish-foot squares you'd need to fill a rectangle that fully enclosed the entire border of California, then one billion, four-hundred-forty-three million, five-hundred-thirty-one thousand, and four-hundred-twenty-six (40,757 times 35,418) is pretty close.
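Both of those claims are easy to sanity-check (a couple lines of throwaway Python, not part of the pipeline):

``` python
# One arc second, expressed in degrees, matches the "Pixel Size" line above,
# and the width times the height gives the total pixel count.
print(1 / 3600)         # 0.0002777777777777778
print(40_757 * 35_418)  # 1443531426
```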
The other units in there are under the "Coordinate System is" section, and are meters relative to the [World Geodetic System 1984](https://en.wikipedia.org/wiki/World_Geodetic_System) vertical datum (distances from this reference surface in the dataset are within 16 meters of the true distance in reality); the very last line is the lowest and highest points in the file, which are <a name="minmax-height"></a>-130 meters and 4,412 meters respectively, relative to the baseline height defined by the WGS84 ellipsoid. If you were to view the file with an image viewer, it would look like this:<a name="raw-dem"></a>

![the ca_topo image; it's hard to make out details and very dark][small_ca_topo]
*<center><sup><sub>if you squint, you can kinda see the mountains</sub></sup></center>*
It's almost completely black because the highest possible value an image like that could have for a pixel is 65,535[^16-bit-ints], and the highest point in our dataset is only 4,412, which is not that much in comparison. Plus, it includes portions of not-California in the height data, and ideally, we want those places to not be represented in our dataset; we have a little more processing to do before we can use this.
## Cartography is complicated

The first order of business is to mask out everything that's not California, and the first thing I needed for that was a [shapefile](https://en.wikipedia.org/wiki/Shapefile) that described the California state border. Luckily, [that exact thing](https://data.ca.gov/dataset/ca-geographic-boundaries) is publicly available from the state's website; thank you, State of California!

There was only one issue: the shapefile was in a different [map projection](https://en.wikipedia.org/wiki/Map_projection) than the data in our geotiff file. A "map projection" is just the term for how you display a curved, 3D shape (like the border of a state on the curved surface of the Earth) on a flat, 2D surface, like a map. If you look at the line in the output of `gdalinfo` above that says `ID["EPSG",4326]`, that is telling us the particular projection used. [EPSG 4326](https://en.wikipedia.org/wiki/EPSG_Geodetic_Parameter_Dataset) uses latitude and longitude, expressed in degrees, covers the entire Earth including the poles, and references the WGS84 ellipsoid as the ground truth.
The shapefile was in a projection called [EPSG 3857](https://en.wikipedia.org/wiki/Web_Mercator_projection), or "Web Mercator". This is similar to EPSG 4326, except instead of using the WGS84 ellipsoid, it pretends the Earth is a perfect sphere. It only covers +/- 85-ish degrees of latitude (so not the poles), and it uses meters instead of degrees of lat/long. It's popular with online map services (like Google Maps and Open Street Maps) for displaying maps, hence the name, "Web Mercator", so you'd probably recognize the shapes of things in it.
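If you want to feel the difference between the two projections, here's a tiny sketch using the `pyproj` package (my own illustration; it wasn't part of this project):

``` python
# Convert the mosaic's center point from EPSG:4326 (degrees) to
# EPSG:3857 (meters); the x value should land right on the "Center"
# easting in the reprojected gdalinfo output further down.
from pyproj import Transformer

to_mercator = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)
x, y = to_mercator.transform(-119.4488889, 37.1951389)  # lon, lat in degrees
print(x, y)  # roughly -1.33e7 east, 4.47e6 north, both in meters
```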
Once again, there's a [handy GDAL tool](https://gdal.org/programs/gdalwarp.html), `gdalwarp`, which is for reprojecting geotiffs. So all we have to do is take our 4326-projected geotiff, use `gdalwarp` to project it to 3857/Web Mercator, and then we can use the shapefile to mask off all other height data outside the border of California.

It's almost *too* easy.

> `gdalwarp -t_srs EPSG:3857 ca_topo.tif ca_topo_mercator.tif`

This gives us a 3857-projected file called `ca_topo_mercator.tif`. It still has over a billion pixels in it (it's a little bigger overall, but the aspect is much wider, with the different projection); scaling it down will be a very last step, since at that point, it will no longer be a digital elevation map, it will just be an image. We'll get there, just not yet.

Cracking open `gdalinfo`, we get:
``` text
$ gdalinfo ca_topo_mercator.tif
Driver: GTiff/GeoTIFF
Files: ca_topo_mercator.tif
Size is 36434, 39852
Coordinate System is:
PROJCRS["WGS 84 / Pseudo-Mercator",
    BASEGEOGCRS["WGS 84",
        ENSEMBLE["World Geodetic System 1984 ensemble",
            MEMBER["World Geodetic System 1984 (Transit)"],
            MEMBER["World Geodetic System 1984 (G730)"],
            MEMBER["World Geodetic System 1984 (G873)"],
            MEMBER["World Geodetic System 1984 (G1150)"],
            MEMBER["World Geodetic System 1984 (G1674)"],
            MEMBER["World Geodetic System 1984 (G1762)"],
            MEMBER["World Geodetic System 1984 (G2139)"],
            ELLIPSOID["WGS 84",6378137,298.257223563,
                LENGTHUNIT["metre",1]],
            ENSEMBLEACCURACY[2.0]],
        PRIMEM["Greenwich",0,
            ANGLEUNIT["degree",0.0174532925199433]],
        ID["EPSG",4326]],
    CONVERSION["Popular Visualisation Pseudo-Mercator",
        METHOD["Popular Visualisation Pseudo Mercator",
            ID["EPSG",1024]],
        PARAMETER["Latitude of natural origin",0,
            ANGLEUNIT["degree",0.0174532925199433],
            ID["EPSG",8801]],
        PARAMETER["Longitude of natural origin",0,
            ANGLEUNIT["degree",0.0174532925199433],
            ID["EPSG",8802]],
        PARAMETER["False easting",0,
            LENGTHUNIT["metre",1],
            ID["EPSG",8806]],
        PARAMETER["False northing",0,
            LENGTHUNIT["metre",1],
            ID["EPSG",8807]]],
    CS[Cartesian,2],
        AXIS["easting (X)",east,
            ORDER[1],
            LENGTHUNIT["metre",1]],
        AXIS["northing (Y)",north,
            ORDER[2],
            LENGTHUNIT["metre",1]],
    USAGE[
        SCOPE["Web mapping and visualisation."],
        AREA["World between 85.06°S and 85.06°N."],
        BBOX[-85.06,-180,85.06,180]],
    ID["EPSG",3857]]
Data axis to CRS axis mapping: 1,2
Origin = (-13927135.110024485737085,5178117.270359318703413)
Pixel Size = (34.591411839078859,-34.591411839078859)
Metadata:
  AREA_OR_POINT=Area
Image Structure Metadata:
  INTERLEAVE=BAND
Corner Coordinates:
Upper Left  (-13927135.110, 5178117.270) (125d 6'34.50"W, 42d 6'51.50"N)
Lower Left  (-13927135.110, 3799580.326) (125d 6'34.50"W, 32d16'33.21"N)
Upper Right (-12666831.611, 5178117.270) (113d47'17.10"W, 42d 6'51.50"N)
Lower Right (-12666831.611, 3799580.326) (113d47'17.10"W, 32d16'33.21"N)
Center      (-13296983.361, 4488848.798) (119d26'55.80"W, 37d21'21.69"N)
Band 1 Block=36434x1 Type=Int16, ColorInterp=Gray
```
You can see that the `PROJCRS[ID]` value is `"EPSG",3857`, as expected. The "pixel size" is "34.591411..." and the "lengthunit" is "metre". But the number of pixels is different, and the shape is different, yet the coordinates of the bounding corners are the same as the original file's (the latitude and longitude given as the second tuple). This is all from the Web Mercator's different projection causing the aspect ratio to stretch horizontally, but it still represents the same area of the planet.
## The one custom script

Now that we had our geotiff in the right projection, the next step was to use the shapefile to mask out the California border in it. Here is where GDAL failed me, and looking around now as I write this, I still can't find a specific GDAL tool for doing it. Given how useful I found all the other tools, I can't really complain, so I won't! It wasn't that hard to write something that would do it with other open source tools; I didn't even bother checking this into a git repo or anything:
``` python
|
||||||
|
#!/usr/bin/env python3
|
||||||
|
|
||||||
|
import fiona # for reading the shapefile
|
||||||
|
import rasterio # for working with the geotiff
|
||||||
|
import rasterio.mask as rmask
|
||||||
|
|
||||||
|
import sys
|
||||||
|
|
||||||
|
def main():
|
||||||
|
tif = sys.argv[1]
|
||||||
|
msk = sys.argv[2]
|
||||||
|
out = sys.argv[3]
|
||||||
|
|
||||||
|
print("input: {tif}\nmask: {msk}\noutput: {out}".format(tif=tif, msk=msk, out=out))
|
||||||
|
if input("Enter 'y' to continue: ").lower() != 'y': # double-check I don't stomp something I wanted to keep
|
||||||
|
print("See ya.")
|
||||||
|
return
|
||||||
|
|
||||||
|
with fiona.open(msk, "r") as shapefile:
|
||||||
|
shapes = [feature["geometry"] for feature in shapefile]
|
||||||
|
|
||||||
|
with rasterio.open(tif) as in_tif:
|
||||||
|
out_image, out_xform = rmask.mask(in_tif, shapes, filled=True, crop=True)
|
||||||
|
out_meta = in_tif.meta
|
||||||
|
out_meta.update({"driver": "GTiff",
|
||||||
|
"height": out_image.shape[1],
|
||||||
|
"width": out_image.shape[2],
|
||||||
|
"transform": out_xform})
|
||||||
|
for k, v in out_meta.items():
|
||||||
|
print("{}: {}".format(k, v)) # just outta curiosity
|
||||||
|
|
||||||
|
with rasterio.open(out, "w", **out_meta) as dest:
|
||||||
|
dest.write(out_image)
|
||||||
|
|
||||||
|
print("Wrote masked tif to {}".format(out))
|
||||||
|
|
||||||
|
return
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
main()
|
||||||
|
```
|
||||||
|
|
||||||
|
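Invoking it looked something like this (the script and shapefile names here are stand-ins; I didn't
keep the originals):

> `./mask_tif.py ca_topo.tif ca_border_shapefile.shp masked_ca_topo.tif`
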
I include that just in case anyone else ever needs to do this, and doesn't find one of the hundreds
of other examples out there already. This one is nice because you don't need to pre-process the
shapefile into [GeoJSON](https://geojson.org/) or anything; the
[Fiona](https://pypi.org/project/Fiona/1.4.2/) package handles things like that transparently for
you. But don't think this is great Python or anything; it's the dumbest, quickest thing I could crap
out to do the task I needed done[^the-real-treasure-is-the-gd-treasure].

After running that script, I had a Web Mercator-projected geotiff that included data only for places
inside the state border of California. It was still enormous; the mask didn't change the shape, and
you can't have non-rectangular images anyway, but at this point, I had the final desired
dataset. It was time to turn it into a heightmap that we could use to make a mesh.

## A usable heightmap

I've been trying to be careful about referring to the image file as a "dataset" or "geotiff", vs. a
"heightmap". A geotiff file is not a regular image file; it includes particular metadata and data
that is meant to be interpreted as a real map of the land; each pixel in it says something about an
exact, actual location in the real world. In our geotiff, a mountain would have to be more than
twelve miles high before it appeared as bright white[^zero-pixel-value].

A "heightmap" is an image file, like a geotiff, where each pixel's monochromatic intensity is meant
to represent height above some lowest plane. The difference is that the height values are *normalized*
so that the lowest height is 0, and the highest is the maximum possible value in the format's value
range. For geotiff digital elevation maps, which use 16-bit numbers as previously mentioned, that
maximum possible value is 65,535. But unlike a geotiff, a generic heightmap has no exact
correspondence with anything else; it's not necessarily an accurate dataset, and won't include the
GIS stuff like what projection it is, what the coordinate bounding boxes are, etc. But it *is*
useful for turning into a mesh.

And here I get to the [final GDAL tool](https://gdal.org/programs/gdal_translate.html) I used,
`gdal_translate`. This is something that can read in a geotiff, and write out a different image
format. When in doubt, [PNG](https://en.wikipedia.org/wiki/Portable_Network_Graphics) is fine, I
always say. It's a simple format that nearly everything can read, and it's compressed, so it should
be a much smaller file on disk, even if it's the same number of pixels. Smaller file size is always
easier.

> `gdal_translate -of PNG -ot UInt16 -scale -130 4412 0 65535 masked_ca_topo.tif heightmap.png`

Like we saw <a href="#minmax-height">earlier</a>, the lowest point we had in our data was -130
meters, and the highest was 4,412. The `-scale -130 4412 0 65535` arguments are saying, "anything
with a height of -130 should be totally dark, and anything with a height of 4,412 should be as
bright as possible, and anything in-between should be set proportionately." This is a linear
mapping, and preserves the relationships between vertical features (that is, if something is twice
as tall as another thing, that will still be true after being scaled), so in a sense, it's
"accurate", but would it be good, was the question (*note: more foreshadowing*).
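In code, that linear mapping would look something like this (a sketch of the arithmetic only; the
real work happened inside `gdal_translate`):

``` python
def linear_scale(h, h_min=-130, h_max=4412, out_max=65535):
    """Map a height in meters onto the full 16-bit pixel range."""
    return round((h - h_min) / (h_max - h_min) * out_max)

print(linear_scale(-130))  # 0: totally dark
print(linear_scale(4412))  # 65535: as bright as possible
print(linear_scale(2141))  # 32768: half as far up the range, half as bright
```
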
Once I had the PNG file, I used the [ImageMagick](https://imagemagick.org/script/convert.php) `convert`
command to resize the file down to a reasonable size. Finally, I had something I could use to make a
mesh:

![the heightmap made by doing a linear scale of height to brightness][scaled_heightmap]

Pretty cool, right? I thought so! The detail is pretty great; that bright spot near the top is
[Mt. Shasta](https://en.wikipedia.org/wiki/Mount_Shasta), for example;
[Mt. Whitney](https://en.wikipedia.org/wiki/Mount_Whitney) is slightly taller, but not by much, and
is part of a range so it doesn't stand out the way Shasta does. It was time to start making some 3D
geometry with the heightmap[^time-to-mesh]!

# A mesh is born

My next step was to figure out how exactly to turn that heightmap into a mesh. Some searching
assured me that [Blender](https://www.blender.org/), a free and open source 3D modeling package that
I'd dabbled with before, would work well. For example, here's a pretty high-level walk-through of
[how to use a heightmap to displace a mesh
plane](https://alanedwardes.com/blog/posts/create-meshes-from-height-maps-using-blender/), which is
almost exactly what I first wanted to do. Before too long, I had something that looked like this:

![a very pointy california topo][pointy-california]

At first glance, it looks OK, but there's so. much. detail. And it's very, very pointy; it just
looks jagged. Check out this close-up detail of Mt. Shasta:

![a very pointy mt shasta][pointy-shasta]
*<center><sup><sub>witch's hat-assed mountain</sub></sup></center>*

You can tell it wouldn't be nice to touch, and being able to run your fingers along the shape
was a huge part of the appeal of having the physical object.

## Back to the realm of images

Given that it seemed like there were at least a couple semi-related problems from too much detail,
my first instinct was to blur the heightmap, and then reduce the size of it. I used the ImageMagick
`convert` command again [to blur the image](https://legacy.imagemagick.org/Usage/blur/) a couple
rounds, and then resized it down:
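(The exact invocations are lost to time, but they were something along these lines; the blur radius
and resize factor are representative, not the real values:)

> `convert scaled_heightmap.png -blur 0x4 blurred_heightmap.png`
>
> `convert blurred_heightmap.png -resize 50% blurred_heightmap_small.png`
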
![first attempt at blurring the heightmap][blurry-linear-hm]

A little better, but still not great. A few more rounds of blurring and shrinking got me
this:

![second round of blurring the heightmap][blurry-linear-hm-smaller]

With that version, I was able to produce some reasonable-looking geometry in Blender:

![a slightly smoother mesh][smoother-california-mesh]

Or so I thought.

It may have been smoother, but it was still very pointy. A lot of the high-frequency detail had been
removed, which means it's not rough and jagged, but Shasta still looks ridiculous.

## A matter of scale

The problem was that I was doing a linear scaling of the height of features in the data, and the
required factors were so enormous that it distorted the geometry in an ugly way.

The State of California is very large, but for the sake of argument, let's pretend it's exactly 700
miles tall, from the southern tip to the northern border's latitude, going straight north; the real
length is close to that. Also for the sake of argument, let's say that the tallest mountain is 3
miles tall; the actual height is a little less than that, but that's OK, the argument holds more
strongly at lower heights. That means the ratio of height to length is 3/700, or 0.0043-ish.

If you had a physically accurate topographic carving of California that was a foot long, the tallest
peak on the carving would be 0.0043 feet high, which is about 1/20th of an inch, or about 1.3
millimeters. You'd probably be able to see and feel where Shasta was, and see that there was a faint
line from the Sierra Nevadas, but that would be it. That's why it's so hard to see the details in
the <a href="#raw-dem">raw elevation data</a> geotiff.
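(If you want to check that arithmetic, here's the whole derivation as a sketch:)

``` python
ratio = 3 / 700       # height-to-length ratio of the idealized state
inches = ratio * 12   # tallest peak on a foot-long carving, in inches
print(inches)         # ~0.051, about 1/20th of an inch
print(inches * 25.4)  # ~1.3 millimeters
```
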
In order to be able to see any detail, and to meet expectations about what a topographic carving is
supposed to look like, the height of the highest peaks needs to be scaled up by something like
10-20x. My problem was that I was doing a linear scale; I was making *everything* 10-20x taller than
it "should" be, which was causing everything to look stretched and weird.

And even with that amount of exaggeration, some low-elevation features were still not showing
up. For example, [Sutter Buttes, a 2,000-foot tall mound in the Sacramento
Valley](https://en.wikipedia.org/wiki/Sutter_Buttes), which is faintly visible in the heightmap, is
almost not there in the resulting mesh. It's about 1/7th the height of Shasta, which is not all that
much, when Shasta was represented by something 0.75 inches tall.

What I really needed was some non-linear way to scale the height, some way to exaggerate lower
altitudes more than higher ones. The highest points should stay as high as they were; they determine
the ultimate overall height, but lower points should be given a relative boost. An easy way to do
this is to take some fractional root (raise a number to a power between 0.0 and 1.0) of the linear
scaling factor, and use that new value instead. For example, the graph of *x* raised to the
0.41th[^zero-forty-oneth] power looks like this:

![y = x^0.41 between 0 and 1][exp-plot]

Notice how values *at* 0 and 1 are the same as they would be with linear scaling, values *near* 0
rapidly get scaled upward, and by the time you get near 1, it looks almost linear again. The linear
scaling function we'd initially used would just look like a straight line from the lower left corner
to the upper right.
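Concretely, the exponential version of the earlier scaling sketch changes just one line (again, only
a sketch of the arithmetic):

``` python
def exp_scale(h, h_min=-130, h_max=4412, out_max=65535, exponent=0.41):
    """Map height to brightness, boosting lower altitudes relative to higher ones."""
    t = (h - h_min) / (h_max - h_min)  # normalize to [0, 1]
    return round(t ** exponent * out_max)

print(exp_scale(-130))  # still 0
print(exp_scale(4412))  # still 65535
print(exp_scale(610))   # Sutter Buttes-ish height: ~0.16 linear becomes ~0.48 of full brightness
```
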
Luckily, `gdal_translate` has an option to do this kind of scaling, so it was a quick

> `gdal_translate -of PNG -ot UInt16 -scale -130 4412 0 65535 -exponent 0.41 ca_topo.tif
exponentially_scaled_heightmap.png`

and a couple rounds of blurring, and I had the following heightmap:

![a non-linearly scaled heightmap][lo-rez_exp_blurred]

which resulted in a mesh that looked something like this inside Blender:

![3D viewport in Blender showing a topo-displaced mesh that looks like
California][exp-scaled-blending]

Doesn't that look nicer? Notice how a bunch of things that were nearly invisible before, like Sutter
Buttes, are easily visible. Check out the [Channel
Islands](https://en.wikipedia.org/wiki/Channel_Islands_(California)) now plain as day! I was feeling
pretty good about having this whole thing wrapped up shortly, only a little late for the birthday.

# A dark age

What followed was two frustrating weeks attempting to get a manifold mesh out of Blender that was
small enough (by which I mean the number of vertices and edges) that
[FreeCAD](https://www.freecadweb.org/) could turn it into an STP file. Unfortunately, FreeCAD is not
a good tool for doing fancy things with meshes, like creating them from a heightmap, so I had to use
two different tools.

This also meant that I would run into surprising limits when going between them. Let me explain. I'd
get a mesh in Blender, export it to a neutral mesh format like
[OBJ](https://en.wikipedia.org/wiki/Wavefront_.obj_file) that both programs understand well, and it
would be a 60 megabyte file. My computer has 32 **giga**bytes, more than 500 times more memory than
that, so you'd think it would not be a problem.

The act of asking FreeCAD to import that OBJ file as a *mesh*, and not even as a solid body, caused
the memory use to go to 21 gigabytes. This is a lot, but the computer still had plenty of room left
in memory for things like "responding to the keyboard and mouse" or "redrawing the
screen". Everything at this point was still perfectly usable.

When I attempted to convert that mesh into a solid body, though, memory use ballooned up to
encompass all available RAM, and my system immediately came to a nearly imperceptible crawl until my
frantic `ctrl-c`s were finally registered by the [signal
handlers](https://www.gnu.org/software/libc/manual/html_node/Termination-Signals.html) in FreeCAD
before I could use it again. This happened *a lot*. At last, <a href="#prophecy">the prophecy</a>
had come to pass.

I went through many rounds of attempting to clean up the mesh and reduce its complexity, but I don't
have many notes or intermediate artifacts from this time. A lot of that was being a beginner at both
Blender **and** FreeCAD, though there's so much educational material that I was rarely held back by
not knowing how to do a particular thing inside each program. A lot was inexperience in the domain;
I did not know how much detail was essential, and I did not have a lot of experience with digital
modeling in the first place. The workflow was very manual, and cycles were fairly long, which made
it hard to try a bunch of things in quick succession as experiments. All those things and more
conspired to make this portion of the process a total slog with very little to show off.

# Test prints

Eventually, after a couple weeks of trying and failing to get something into FreeCAD that I could then
work with (like merging it with a thick base and trimming that base to follow the shape of the
state), I had had enough. I was just going to send the shop an STL file and forget about trying to
get an STP file. I have some notes from then, right after I'd started my first test print:

> I'm finally printing something out. I've given up on converting it into [something CAD-friendly];
> it seems this is a Hard Problem, but I'm not sure why. My goal with doing that was to give a
> CAD-friendly file to a local CNC milling shop, per their request, since when I suggested a
> mesh-based file (STL), the guy was like "I can't do much manipulation with that to make it more
> manufacturable, so a real CAD file would be best".
>
> But at least with an STL file, I can print it myself. So that's going now, we'll see how it turns
> out in no less than eight hours.
>
> I haven't really done anything else with my computer besides this for a while.

When that print was done, here's what it looked like:

![a piece of literal dogshit][crappy_test_print]
*<center><sup><sub>don't look at me, I'm hideous</sub></sup></center>*

In case you were not revolted enough, then please allow me to direct your gaze toward this eldritch
abomination:

![close-up of extremely bad print results][crappy-close-up]
*<center><sup><sub>what did I just say about looking at me</sub></sup></center>*

As bad as it looked, it felt even worse to touch. Setting aside the hideous base with its weird
visual artifacts due to those areas not being a single flat polygon, but rather several polygons
that were not parallel, there was still just too much high-frequency detail in the terrain, and it
was a total mismatch with the 3D printed medium.

The real thing was going to be carved out of wood by a [CNC
mill](https://all3dp.com/2/what-is-cnc-milling-simply-explained/), which uses a drill-like component
to carve away pieces of the material you're working with. This means that there's a tiny spinning
bit with a definite, finite size, and any detail in the model smaller than the end of that spinning
bit would likely be impossible to carve with it. This meant that all that high-frequency detail was
not only ugly, it was also completely unnecessary.

## Just try harder

I was very eager to get something into the CNC shop's hands at this point, but I also knew that this
model was not acceptable. So, I resolved to brutally simplify the geometry until I got something
that was workable inside FreeCAD.

First off, I made the heightmap even smaller, only 500 pixels wide. Fewer pixels means fewer details
for turning into a displacement map for a mesh! I also removed the Channel Islands from the
heightmap, resulting in this final mesh displacement input:

![it's the final heightmap][final-heightmap]
*<center><sup><sub>it's the final heightmap (doot-doot-doot-doot,
doot-doot-doot-doot-doot)</sub></sup></center>*

Inside Blender, I'd gotten quite proficient at running through the steps to generate a mesh from a
heightmap (roughly sketched below), and once I'd done that, I went through several rounds of [mesh
simplification](https://graphics.stanford.edu/courses/cs468-10-fall/LectureSlides/08_Simplification.pdf);
the geometry was practically homeopathic.
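(Those steps were all done by hand in Blender's UI; this is just a rough equivalent in its Python
API, with illustrative parameter values and a placeholder path:)

``` python
import bpy

# displace a subdivided plane with the heightmap, then decimate it
bpy.ops.mesh.primitive_grid_add(x_subdivisions=500, y_subdivisions=500, size=2.0)
plane = bpy.context.active_object

tex = bpy.data.textures.new("heightmap", type='IMAGE')
tex.image = bpy.data.images.load("/path/to/final_heightmap.png")  # placeholder path

disp = plane.modifiers.new("displace", type='DISPLACE')
disp.texture = tex
disp.strength = 0.15  # the vertical exaggeration knob

dec = plane.modifiers.new("simplify", type='DECIMATE')
dec.ratio = 0.1  # keep roughly 10% of the faces

for name in ("displace", "simplify"):
    bpy.ops.object.modifier_apply(modifier=name)
```
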
![the final model in blender][final-model]
*<center><sup><sub>by the principles of homeopathy, the fewer the vertices, the more potent the mesh</sub></sup></center>*

Check out this close-up of Mt Shasta:

![close-up of Shasta in the final model][final-shasta]
*<center><sup><sub>a chonkier, less lonesome Mt Shasta</sub></sup></center>*

Present, but not obnoxious. I printed out a second test print to make sure it looked as good in
physical reality:

![the final test print of the final model][final-print]

Verdict: yes. If you want, you can visit
<https://www.printables.com/model/240867-topographic-california> and download the 3D printer file to
print it yourself at home. If you don't have a 3D printer, you can still look at and interact with a
3D model of it in the browser on that site, so it's still kind of neat. A couple different strangers
uploaded pictures of their prints of it, which I thought was cool!

I brought the mesh into FreeCAD and finally was able to create the STP[^fancy-iges] file the shop had
asked for, a mere twenty-five days after I'd last spoken with them.

# Final cut

I emailed the file to the shop, and said,

> As modeled, there's probably more high-frequency detail in the mountains than is necessary, as I'm
> going for something that feels nice to the touch so smoother is better. It's also modeled at a
> slightly larger scale than necessary, though not too far off (it's 500x577mm, and I'm interested
> in the 400-500mm range for width; the relief height is in the 20-30mm range depending on scale). I
> was imagining it would be carved with contour cuts in some thick nice ply, though I'm happy to
> hear better ideas; I have literally no experience with making something like this.

The shop came back with,

> I can’t smooth out the cuts, I can only cut what is there. That being said, if I use a rounded cutter, it will round out the valleys but not the peaks as it won’t go into areas that it can’t reach.
>
> Hope that makes sense.
>
> Let me know if this will work for you or not. If you think it will, I will try to program the toolpaths and see what it will look like.

I definitely didn't want to lose the sharp seams in the bottoms of the valleys!

> Me: I guess what I was really saying is that if some detail is lost due to using a larger cutting
> head that's probably fine. I wouldn't necessarily want the valleys to be made more concave than
> they already are, though. Does that make sense?
>
> Shop: Yes, that makes sense. I can use a Vee cutter and it will cut the sharp edges in the
> valleys.
It felt nice to be understood! Next came the issue of cost:

> I ran the numbers on both sizes using a .01” step-over cut, meaning that is how far apart the
> finish cuts will be from each other.
>
> You will probably see some tool marks depending on what type of material is used.
>
> The larger one is coming in at around $850.00 and the 12” one at $350.00.
>
> I can go tighter, say .005” step-over and it will probably not show many marks but I won’t know
> until I run it.
>
> If I do that it will double the cut time so close to doubling the price.

One of the things that my wife had said she wanted to do with the carving of California was sand and
finish it herself, so the coarser 0.01-inch step-over cut was not really a problem. Even the
0.005-inch cut would still require a final sanding before staining or sealing.

The "larger one" the shop referred to was for a 20-inch wide carving, which would be way too huge
anyway; 12 inches was fine. Still, $350 was at the top of what I had hoped/expected to spend. I
hoped it was worth it!

After a few more back-and-forths and days, I got a message from the shop saying it was ready. They
also said,

> I decided to run these with half the original step-over, which means it takes twice as long but
> the finish is almost smooth. I think you will be pleased with it.

Whoa! This meant he had used the 0.005-inch cutting resolution, and the job had taken twice as long
as originally quoted. Like the [kind and generous tailor from *The Hudsucker
Proxy*](https://getyarn.io/yarn-clip/0f78e11f-df94-42e4-8bdf-b11c39326f7c), he had given me the
double-stitch anyway, even though I had insisted that single stitch was fine. I was very excited and
grateful, and couldn't wait to see it.

## Pics or it didn't happen

When I got there, it was almost exactly what I had imagined and hoped it would be. Obviously, you've
seen the photo at the top of the page, but please enjoy this CNC-carved topographic California porn.

![portrait of the whole state][wood-portrait]
*<center><sup><sub>some nice soft lighting</sub></sup></center>*

![our old friend, the Sutter Buttes][wood-buttes]
*<center><sup><sub>sutter buttes, we meet again</sub></sup></center>*

![down low view, like the shot from Blender][wood-blender]
*<center><sup><sub>recognize this angle, from blender?</sub></sup></center>*

![close up of Shasta][wood-shasta]
*<center><sup><sub>lookin' good, shasta</sub></sup></center>*

I wasn't the only one pleased with it; my wife was delighted when she saw it.

MISSION ACCOMPLISHED, HAPPY *<sub>belated</sub>* BIRTHDAY!
# Thank yous

Obviously, I have tons of people to thank for their help with this, either directly or
indirectly. First and foremost, my wife, for everything, but especially for the inspiration and also
her patience with me during this process.

A close second for this project goes to Steve at [Triumph CNC](https://www.triumphcnc.com/). He
asked me what I was going to do with it, and when I said give it to my wife as a gift, he said, "Oh,
that's great! I feel even better about using the smaller step-over now." If you need some CNC
milling done in Los Angeles, maybe give them a call!

Along the way during this journey I got a lot of feedback and suggestions from friends and
colleagues, so thank you, 'rades[^short-for-comrades]!

Of course, this would all have been unthinkably difficult not so long ago, but thanks to things like
NASA's missions and public GIS datasets, almost anyone can do something like this.

And not just public, government data and organizations, but private, passion-driven free software
projects like Blender and FreeCAD that rival functionality found in multi-thousand-dollar commercial
packages. I'm in awe of their accomplishments; they are true wonders of the modern world.

# Things I learned, and some lessons

I said early on that I knew basically nothing about any of this, and that was true. I had had some
earlier casual experience with both Blender and FreeCAD, and many, many years ago I had taken a
semester of engineering drafting my first year of college. But I knew basically nothing about GIS,
about the different map projections, about shapefiles, about any of the tools or jargon. Likewise,
I have no experience or instruction in any kind of CNC milling; my scant 3D printing experience
doesn't really apply.

This article is as close as I could get to serializing nearly everything I had to learn and do to
create that carving.

And at the time it was happening, it didn't feel like I was retaining all of it, or that I really,
truly understood everything I had done; I was hurrying as fast as I could toward a particular
goal. But in the course of writing this, I was basically retracing my steps, and found that I really
did have a pretty good handle on it. One of my favorite things to do is learn stuff, so this was a
great outcome for me!

If I were to do this again, or if I were starting for the first time with the benefit of someone
else's experience, there are obviously a few things I would do differently. First off, I'd see if I
could find a lower-resolution dataset. One arc second is way overkill; at the scale of a topo
carving that you can hold in your hands, a resolution of several arc *minutes* (one arc minute is
one [nautical mile](https://en.wikipedia.org/wiki/Nautical_mile), which is about 1.1 regular
(terrestrial?) miles) would probably be enough.

I'd also use the USGS [national map downloader](https://apps.nationalmap.gov/downloader/) site to
get just the California data; you can upload a shapefile and it'll give you back a masked
geotiff. If I had started from that, it would have shaved at least two weeks off the time it took me
to make the thing; I could have jumped immediately into being frustrated in Blender and FreeCAD.

Speaking of, I wish I could give some guidance on effectively using Blender and FreeCAD, but that's
a journey only you can plot. That's probably not true, but I still feel like a doofus in those
tools, so I don't feel like it's worth anyone's time to hear from me about them. Good luck in your
quest!

---
[wood-portrait]: wood-full-portrait.jpg "portrait shot of the whole carving"

[wood-buttes]: wood_sutter_buttes.jpg "our old friend, the Sutter Buttes"

[wood-blender]: wooden_like_blender.jpg "down low view, like the shot from Blender"

[wood-shasta]: wooden_shasta_close-up.jpg "close up of Shasta"

[final-print]: final_printed.jpg "the final test print of the final model"

[final-shasta]: final_shasta.png "close-up of Shasta in the final model"

[final-model]: final_ca_topo_blend.png "the final model in Blender"

[final-heightmap]: final_heightmap.png "it's the final heightmap (sick synthesizer riff blasts)"

[crappy-close-up]: crappy_test_print_close_up.jpg "close-up of extremely bad print results"

[main_image]: wood_ca_on_table.jpg "A plywood slab carved with CNC into a topographic representation of California"

[programmers_creed]: /images/programmers_creed.jpg "jfk overlaid with the programmer's creed: we do these things not because they are easy, but because we thought they were going to be easy"

[meshy-cube]: meshy-cube.png "an overly-complicated mesh of a cube"

[geotiff-files]: geotiff-files.png "the input geotiff files and the resulting 'ca_topo.tif' output file, which is 2.7 gigabytes"

[small_ca_topo]: small_ca_topo.png "a 'raw' heightmap of california and parts of nevada, arizona, and mexico"

[scaled_heightmap]: scaled_heightmap.png "the heightmap made by doing a linear mapping of height to brightness"

[pointy-california]: pointy_california_blending.png "the displaced mesh plane made from the first heightmap"

[pointy-shasta]: pointy_shasta_close-up.png "a very pointy mt shasta"

[blurry-linear-hm]: blurred_scaled_hm_3.png "first attempt at blurred heightmap"

[blurry-linear-hm-smaller]: lo-rez_blurred_hm3.png "second round of blurring the heightmap"

[smoother-california-mesh]: blending_california.png "slightly smoother mesh in blender"

[exp-plot]: exponential_plot.png "a graph of the function `y = x^0.41` between 0 and 1"

[lo-rez_exp_blurred]: lo-rez_exp_blurred.png "nearly final heightmap, using exponential scaling to exaggerate lower altitudes"

[exp-scaled-blending]: non-linear_scaling_of_ca_height_data.png "You can see how Shasta doesn't stick out so much when the other hills are brought up a bit relatively speaking"

[crappy_test_print]: ca_topo_crappy_test_print.png "a piece of literal dogshit"
[^introspection]: The conclusion upon examination was, "I just wasn't thinking".

[^math-computers]: I'm pretty sure this is more "represent shapes with math" than with a computer, but
the computer is helping us do the math and it's more relatable.

[^manifold_holes]: I *think* you could also have a 2D sheet with a hole cut out of it represented by
a mesh that is manifold, as long as the connectivity was correct in terms of how many shared edges
and vertices there were (though this would not be a valid STL file). Imagine a cloth sheet with a
hole cut out in the middle, and the edge of the hole hemmed or otherwise "sealed", which is then a
*manifold boundary*. See [this powerpoint
deck](https://pages.mtu.edu/~shene/COURSES/cs3621/SLIDES/Mesh.pdf) for a pretty math-y overview of
"mesh basics" (but not really that basic, that's just academics trolling us, don't let it bother
you). If I'm wrong about a 2D sheet with a hole being possibly manifold, I invite correction!

[^chekhovs-ram]: A textbook example of *Chekhov's Scarce Computational Resource*.

[^wgs-ellipsoid]: Technically, it's an arc along the WGS84 ellipsoid, which is a perfectly smooth
*smushed* sphere, which more closely matches the real shape of the Earth vs. a perfectly round sphere.

[^16-bit-ints]: Each pixel is 16 bits, so the possible values are from 0 to 2^16 - 1. 2^16 is 65536,
so there you go.

[^the-real-treasure-is-the-gd-treasure]: A friend posited at one point that my circuitous journey to
the end product was the point, but I assured him that every step I took was trying to get to the end
product as quickly and straightforwardly as possible. Still, I did in fact wind up learning a whole
shitload of stuff, which is nice, I GUESS.

[^zero-pixel-value]: I'm not actually sure what the "0" altitude pixel value is. It can't actually
be 0, because the numbers in the file can't be negative, and there are deep valleys on the earth's
surface. But it's clearly not that high a value, otherwise, when you viewed the geotiff as an image,
it would be closer to white or gray than black.

[^time-to-mesh]: Based on the timestamps of the files in the directory where I was working on this
project, it took about ten days from the time I first downloaded a geotiff dataset to having the
heightmap shown above, so you can imagine all the dead-ends I went down and did not share in this
write-up.

[^zero-forty-oneth]: I think this was just the first fractional value that I tried, and it was fine.

[^fancy-iges]: I actually produced an [IGES](https://en.wikipedia.org/wiki/IGES) file; STP is basically
fancy IGES, and my model didn't include the extra info in STP files like material and color anyway.

[^short-for-comrades]: pronounced "rads", and is short for "comrades".
|
BIN content/sundries/a-thoroughly-digital-artifact/meshy-cube.png Normal file

143 content/sundries/a-thoroughly-digital-artifact/notes.txt Normal file

@ -0,0 +1,143 @@
initial comms with CNC shop:
------------------------------
Me: "project description": A relief map of California, carved from wood. Height exaggerated enough
to visibly discern the Santa Monica mountains. I can provide an STL file if needed.

Shop: STL can work but I can’t manipulate it, which could save some money. If possible can it be
exported to an .igs or .iges or .stp format?

Me: Yeah, STP should be no problem. Can you give a rough estimate of the cost for 1x2-foot relief carving?

Shop: Without seeing the drawings, I can’t give even a close price but in the past they range from a
few hundred dollars to several thousand dollars.

Me: That's totally fair! I'll get you some files in a few days.
------------------------------

next comms with shop three weeks later:
------------------------------

Hi Steve, I'm sorry for taking so long to get back to you! I had a harder time producing the IGES
file than I thought I would, but I think this should be OK:

(snip url to file)

It's 51 megabytes, otherwise I'd attach here.

As modeled, there's probably more high-frequency detail in the mountains than is necessary, as I'm
going for something that feels nice to the touch so smoother is better. It's also modeled at a
slightly larger scale than necessary, though not too far off (it's 500x577mm, and I'm interested in
the 400-500mm range for width; the relief height is in the 20-30mm range depending on scale). I was
imagining it would be carved with contour cuts in some thick nice ply, though I'm happy to hear
better ideas; I have literally no experience with making something like this.

(NOTE: go back to email thread and summarize the back-and-forth over tooling)
---------------------------------

Note that the shop did the extra work anyway just because they were nice, and that he was glad when
I told him it was a gift for my wife.


Zulip dump from 10 days after initial contact:
-----------------------------------
It IS Mt. Shasta!

After I made the mosaic out of the tiles I downloaded to cover the area, I masked it with an outline
of the state that I downloaded from a California gov geo site, then used a program called
gdal_translate to turn the image, a "geotiff" file with height data encoded, into that heightmap png
with the lowest value mapped to 0 and the highest to maxint.

I also had to reproject the geotiff with the height data into the same coordinate system as the
state outline was in. The height data was in a system using lat/long called "EPSG:4326", while the
state outline was made from line segments with 2d vertices in a projected coordinate system called
"EPSG:3857" with units of "meters". 3857 is "web Mercator", and is the coordinate system used by
Google and Open Street Map for their map tiles and other shapes.

It may or may not be surprising that cartography is very complicated!

My next step is to turn this heightmap into solid geometry that I can 3d print and/or send to a
local CNC shop to have them carve a relief of California out of wood, which is close to the final
step of producing an artifact as a present for my partner.

There are a bunch of python packages for working in this domain, but they're all just wrappers
around various GDAL libraries or tools.

The raw topo data I got from
https://portal.opentopography.org/raster?opentopoID=OTSRTM.082015.4326.1 (that was the epsg 4326
geocoded tiff; actually, several of them because you can only download up to 450km^2 at a time,
hence having to mosaic them with the gdal_merge.py command (the '.py' is in the name of the command
that gets installed when you do apt install gdal-bin)), then use gdalwarp to re-project to 3857,
then I had to write a python program to mask it for some reason, then gdal_translate (no .py on that
one, but they're all just python scripts) to convert to the png heightmap. I'm leaving out a couple
details in the workflow, but that's the main shape of it.

OK, actually, now that all that context is established, here's the actual command that produced that
file from the geocoded tiff:

gdal_translate -of PNG -ot UInt16 -scale -130 4412 0 65535 cropped_ca_topo.tif heightmap_cropped.png

and then I used convert (the imagemagick program) to scale the png from 33,311x38,434 to
2,000x2,308 pixels.

the -scale -130 4412 0 65535 is mapping the height data min/max to the png whiteness in the output
file.
---------------------------------------

Zulip musings from a few days after that, still working on the heightmap:
---------------------------------------
(re: non-linear scaling of height to reduce pointiness)
ok, it was easier than I thought it would be. gdal_translate has a -exponent flag you can use with
-scale, so I remade the heightmap with an exponential scaling, using 0.41 as the exponent.

funny enough, I'm still working on this, since even when I drastically scale the size of the mesh
down in Blender (which I export to OBJ for import by FreeCAD), doing anything like modelling (eg,
extruding downward or joining with a solid base, or cutting the shape so it's CA-shaped and not a
rectangle) requires tens of gigabytes of resident memory and I keep having to kill the program and
start over.

a 60-megabyte OBJ file turns into 21 GB of resident data in the modelling software.

I have 32GB of RAM installed

that 21GB expands to 30 when I try manipulating it
------------------------------------------

Zulip from two weeks later (July 7):
--------------------------------------

Two weeks later I'm finally printing something out. I've given up on converting it into a parametric
CAD-like object; it seems this is a Hard Problem, but I'm not sure why. My goal with doing that was
to give a parametric CAD file to a local CNC milling shop, per their request, since when I suggested
a mesh-based file (STL), the guy was like "I can't do much manipulation with that to make it more
manufacturable, so a real CAD file would be best".

But at least with an STL file, I can print it myself. So that's going now, we'll see how it turns
out in no less than eight hours.

I haven't really done anything else with my computer besides this for a while.

(next day)
ok, I got something printed out, but I'm not super stoked on it. Also, I'm still chasing the elusive
dream of turning this into a parametric solid for easier CNCing. Vape pen for scale:
(insert shitty print photo)

(next day after that, the 9th)
I've finally "finished": I've created a mesh that has no missing faces, is not too crazy, and can be
converted into a parametric solid, and sent that off to a local CNC shop for a quote on having it
routed out of wood. I'll also do another 3D print, since the base is now a larger version of the
coastline instead of a rectangle, and the high frequency detail is a little diminished.

----------------------------------------

Links:

https://data.ca.gov/

https://portal.opentopography.org/raster?opentopoID=OTSRTM.082015.4326.1

https://www.printables.com/model/240867-topographic-california

https://touchterrain.geol.iastate.edu/

https://en.wikipedia.org/wiki/Shuttle_Radar_Topography_Mission
@ -0,0 +1,9 @@
f(x) = x**0.41
set terminal png size 600,600
set output 'exp_scaling.png'
set grid

set xrange [0:1]
set yrange [0:1]

plot f(x) lw 3 notitle
BIN content/sundries/a-thoroughly-digital-artifact/small_ca_topo.png Normal file
@ -1,9 +0,0 @@
+++
title = "A Very Digital Artifact"
slug = "a-very-digital-artifact"
date = "2022-11-11"
[taxonomies]
tags = ["3dprinting", "CAD", "GIS", "CNC", "art", "sundries"]
+++

![a CNC-carved exaggerated relief of California made of plywood](PXL_20220723_214758454.jpg)
BIN content/sundries/shit-code/birth_of_freedomdates.png Normal file

BIN content/sundries/shit-code/freedoms_birthday.png Normal file

312 content/sundries/shit-code/index.md Normal file
@ -0,0 +1,312 @@
+++
title = "Shit-code and Other Performance Arts"
slug = "shit-code-and-performance-art"
date = "2023-02-08"
updated = "2023-02-09"
[taxonomies]
tags = ["software", "art", "sundry", "proclamation", "chaos"]
[extra]
toc = false
+++

# A sundry collection of intellectual property, some less intellectual than others

Something I firmly believe is that it's important to make jokes in any medium. Here at NebCorp Heavy
Industries & Sundries, despite occasional dabbling with the
[physical](@/sundries/a-thoroughly-digital-artifact/index.md), we work primarily with software, and
software is one of our primary corporate humor channels. Below is just some of our work there,
from least to most useless.

## *katabastird, a graphical countdown timer*

[katabastird](https://crates.io/crates/katabastird) is, in its own words, "a simple countdown timer
that is configured and launched from the commandline." It looks like this when it's running:

![katabastird running normally][katabastird_normal]

It was created for a couple reasons:
- I wanted to make a GUI program to learn how to use a [particular library called
  "egui"](https://github.com/emilk/egui);
- I had signed up to give a five-minute talk to demonstrate the latest release of a [commandline
  argument parsing library called "clap"](https://docs.rs/clap/4.0.0/clap/), which I had titled,
  "Clap for Clap Four", and I needed a program to showcase it.

Obviously the best way to showcase a commandline-parsing library is to incorporate it into a
graphical program. Other commandline-mission-critical features included changing the color of the
background to get more and more red as less time remained

![katabastird almost done counting down][katabastird_ending]

and using the font used by the alien in *Predator*

![get to the choppah][katabastird_predator]

But by far its greatest feature is an undocumented option, `-A`, that will play an [airhorn
salvo](https://gitlab.com/nebkor/katabastird/-/blob/4ccc2e4738df3f9d3af520e2d3875200534f4f6f/resources/airhorn_alarm.mp3)
when it's done. This option is visible in the program's help text, but it's not described.

Truly honestly, this is not a great program. Once it's launched, it only understands two keyboard
inputs, `ESC` and `q`, both of which simply cause it to exit. Using the mouse, you can pause,
restart, and reset. And that's it, that's all the interaction you get.

In spite of this, I find myself using it all the time. It's easy to launch with different times (the
commandline parsing understands things like `-h` for hours, `-m` for minutes, etc.), and its last
invocation is just an up-arrow in my terminal away. The airhorn cracks me up every time.

At some point, I plan on changing it to something that uses the GPU to run a fire simulation on the
numbers, and have the flame intensity get higher as the time remaining gets lower. I'll save that
for when I want to get slightly more serious about graphics and shaders, though; it would basically
be a total re-write.

As for the name, it's just a perversion of "katabasis", which means, "descent to the Underworld". I
guess a bastardized "bastard" is in there, too. Listen, I'm gonna level with you: I'm not wild about
the name, but what's done is done.

## *randical, a commandline program for generating random values*

Some time ago, I was [trying to work out some ways to pick random points in a
sphere](https://blog.joeardent.net/2018/07/right-and-wrong-ways-to-pick-random-points-inside-a-sphere/),
and during that exploration, I found myself wanting to just be able to generate random values
outside of any program in particular. So, I wrapped a primitive interface around [the random value
generation library](https://docs.rs/rand/0.8.0/rand/index.html) I was using. I wound up using it
selfishly and in a limited fashion for that project, but afterward, decided to expand it a bit and
release it, as my first [real Rust crate](https://crates.io/crates/randical).

I'll reproduce the help text here, since it's fairly comprehensive:

``` text
$ randical -h
Radical Random Value Generator 1.618033

Generates arbitrary numbers of uniformly distributed random values.

USAGE:
    randical [FLAGS] [OPTIONS]

FLAGS:
        --buel       Prints either 'Here.' or 'Um, he's sick. My best friend's sister's boyfriend's brother's girlfriend
                     heard from this guy who knows this kid who's going with the girl who saw Ferris pass out at 31
                     Flavors last night. I guess it's pretty serious.', with equal probability. Not compatible with `-t`
                     or `--bule`.
        --bule       Prints either 'true' or 'false', with equal probability. Not compatible with `-t` or `--buel`.
    -e, --exit       With equal probability, exit with either status 0, like /bin/true, or status 1, like /bin/false.
                     Technically compatible with all other options, but exit status will have no relation to any
                     generated output. Sets default number of values to print to 0.
    -h, --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
    -n, --num-vals <NUM_VALS>    Number of random values to print out. Defaults to 1.
    -t, --type <TYPE>            Type of random value to print. Defaults to 'bool'.
                                 Possible values are 'b'ool, 'f'loat64, 'U'UIDv4, 'u'nsigned64, 's'igned64, and 'k'suid
                                 with millisecond precision.
```
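A couple of hypothetical invocations, just to give the flavor (the output is random, so these exact
values are made up):

``` text
$ randical -t f -n 2
0.5877852522924731
0.3090169943749474
$ randical --bule
false
```
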
The [README](https://github.com/nebkor/randical/blob/main/README.md) contains some examples of using
|
||||||
|
it to do various things, like simulate a fair coin toss, or an *unfair* coin toss, or "a *Sliding
|
||||||
|
Doors*-style garden of forking paths alternate timeline for Ferris Bueller's presence or absence on
|
||||||
|
that fateful day."
|
||||||
|
|
||||||
|
I have one actual non-shithead usecase for this program: in my [.emacs file, I use it to
|
||||||
|
generate](https://gitlab.com/nebkor/dotfiles/-/blob/3aaf06fc66cdb541b76dfd44d25c369c4762301f/.emacs#L113-116)
|
||||||
|
[ksuids](https://segment.com/blog/a-brief-history-of-the-uuid/). But I don't *really* use it.
|
||||||
|
|
||||||
|
I include it mostly because, by most measurable metrics, this is my most popular program with end
|
||||||
|
users that I can specifically identify:
|
||||||
|
|
||||||
|
![randical's popularity is baffling][randical_downloads]
|
||||||
|
|
||||||
|
Who is downloading my software, and why? I don't know, and more importantly, I don't care or need to
|
||||||
|
know. It's truly better for everyone that way.
|
||||||
|
|
||||||
|
## *freedom-dates, a library neither wanted nor needed*
|
||||||
|
|
||||||
|
When I started writing this post, "freedom-dates" existed strictly as a shit-head idea of mine about
|
||||||
|
the dumbest possible way to represent dates as a string. In fact, I had had it about a month before,
|
||||||
|
while chatting with some friends on Discord.
|
||||||
|
|
||||||
|
![the birth of the birth of freedom][freedomdates-discord]
|
||||||
|
*<center><sup><sub>actually i did ask if i should</sub></sup></center>*
|
||||||
|
|
||||||
|
As usual, I thought tossing a small crate together to realize this joke would take, at most, one
|
||||||
|
hour, and be maybe ten lines long. At least this time, it only took five or six times as long as I
|
||||||
|
thought it would. In its own words, `freedom-dates`
|
||||||
|
|
||||||
|
> provides a convenient suite of affordances for working with dates in *freedom format*. That is, it
|
||||||
|
> takes representations of dates in Communinst formats like "2023-02-08", and liberates them into a
|
||||||
|
> Freedom format like "2/8/23".
|
||||||
|
|
||||||
|
For something like this, where I would not want to actually be accused of punching down or being a
|
||||||
|
jingoistic moron, it's important to be as multidimensionally absurd as possible; I really needed to
|
||||||
|
commit to the bit and provide maximum, richly-textured incongruity.
|
||||||
|
|
||||||
|
Luckily, using the [Rust language](https://www.rust-lang.org/) helps with that in a lot of
|
||||||
|
ways. After I [published it to the official package
|
||||||
|
repository](https://crates.io/crates/freedom-dates), the official documentation site built and
|
||||||
|
published the [autogenerated documentation](https://docs.rs/freedom-dates/latest/freedom_dates/) for
|
||||||
|
it. This leads to the creation of content that looks like this:
|
||||||
|
|
||||||
|
![this is history][freedoms-birthday]
|
||||||
|
|
||||||
|
The slick-looking defaults and basically frictionless process for participating in the Rust
|
||||||
|
ecosystem make it easy for culture-jamming like this. All I had to do was diligently comment, test,
|
||||||
|
and document my code[^just-do-lots-of-work], and the larger systems took care of the rest.
|
||||||
|
|
||||||
|
Rust also provides a lot of different fancy programming tools, like
|
||||||
|
[`Traits`](https://docs.rs/freedom-dates/latest/freedom_dates/trait.FreedomTime.html), that allow
|
||||||
|
you to dress up deeply unserious content in deeply serious costume.
|
||||||
|
|
||||||
|
In all real seriousness, though, I hope that seeing how easy it is to get something this silly
|
||||||
|
published in the official channels inspires you to overcome any trepidation about doing that
|
||||||
|
yourself, if you have something you want to share!
|
||||||
|
|
||||||
|
## *bad_print, a silly program*
|
||||||
|
|
||||||
|
A few years ago, someone at the [Recurse Center](https://recurse.com/)[^rc-link] started a chat
thread titled "bad print", and opened it with,

> you probably didn't know that you needed a bad print function, one that spawns a thread for each
> character in your string and prints the single character before quitting... well, now that you
> know that you needed this, i've written one for you

and then pasted a 3-line program in Haskell, and asked for other implementations of "bad print" in
any language. I whipped one up using [Rayon](https://github.com/rayon-rs/rayon), a library for doing
some things in parallel really easily, but eventually settled on the following, which uses a much
smaller and more focused external library called
[threadpool](https://github.com/rust-threadpool/rust-threadpool):

``` rust
use std::io::Write;
use std::{env, io};

use threadpool::ThreadPool;

fn main() {
    let args: Vec<String> = env::args().collect();
    if args.len() < 2 {
        panic!("Please supply a phrase to be badly printed.")
    }
    let string = args[1..].join(" ");
    // One full thread per character.
    let num_threads = string.len();

    println!("--------");
    let pool = ThreadPool::new(num_threads);
    for c in string.chars() {
        pool.execute(move || {
            print!("{}", c);
            // Flush so each character appears as soon as its thread prints it.
            let _ = io::stdout().flush();
        });
    }
    pool.join();
    println!();
}
```

I noted about it, relative to earlier versions,

> It appears to output strings with an even larger edit distance from the arguments given to it,
> presumably due to the chaos inherent to harnessing the power of one full tpc (thread per char).

Indeed, witness it for yourself:

``` text
$ bad_print Behold the awesome power of one full Thread Per Character.
--------
Bwoesmd elpoh or onh eu Thread earPCh ceearfofelwtter la.
```

By far the most impressive was a bash script that did *Matrix*-style cascading text in your
terminal, called, appropriately enough, `bad_matrix`; that particular one was by someone [who's a
bit of a shell
wizard](https://www.evalapply.org/posts/shell-aint-a-bad-place-to-fp-part-2-functions-as-unix-tools/index.html#main).

# Other performance arts

An artist's medium is all of reality and all of time, so every piece of the work is eligible for
expression; the frame is also part of the work. Software in my culture is still embedded in a
context that is a bit stuffy, a bit up its ass about things like "copyright" and "semantic
versioning"[^smegver], and so they're things I enjoy playing with, too.

At the bottom of the [readme for
freedom-dates](https://github.com/nebkor/misfit_toys/blob/master/freedom-dates/README.md), I have
the following about the version:

> Freedom *STARTS* at number 1, baby! And every release is ten times the last, so the second release
> is 10, then 100, etc. FREEDOM!

and indeed it is at version 1.0.0; the two `.0`s after the `1` are there to satisfy Cargo's
requirements about semver[^smegver].
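
That release rule is just powers of ten; a throwaway sketch (the function name is mine):

``` rust
/// Freedom version for the nth release: 1, 10, 100, ...
fn freedom_version(release: u32) -> String {
    format!("{}", 10u64.pow(release - 1))
}
```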

## goldver

When I version software for public consumption, I tend to use a scheme I call
"[goldver](https://gitlab.com/nebkor/katabastird/-/blob/main/VERSIONING.md)", short for "Golden
Versioning". It works like this:

> When projects are versioned with goldver, the first version is "1". Note that it is not "1.0", or,
> "1.0-prealpha-release-preview", or anything nonsensical like that. As new versions are released,
> decimals from *phi*, the [Golden Ratio](https://en.wikipedia.org/wiki/Golden_ratio), are appended
> after an initial decimal point. So the second released version will be "1.6", the third would be
> "1.61", etc., and on until perfection is asymptotically approached as the number of released
> versions goes to infinity.
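
The scheme is mechanical enough to fit in a few lines; a sketch (the names are my own, nothing
official):

``` rust
// Decimal expansion of phi after the leading "1.", as far as anyone will need.
const PHI_DIGITS: &str = "6180339887498948482";

/// goldver string for the nth release (1-indexed): "1", "1.6", "1.61", ...
fn goldver(release: usize) -> String {
    match release {
        0 => panic!("releases start at 1"),
        1 => "1".to_string(),
        n => format!("1.{}", &PHI_DIGITS[..(n - 1).min(PHI_DIGITS.len())]),
    }
}

fn main() {
    assert_eq!(goldver(2), "1.6");
    assert_eq!(goldver(3), "1.61");
}
```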

In order to be compliant with the semver version structure, the following rule is applied to
projects published to the official Rust package repository:

> Once there have been at least three releases, the version string in the Cargo.toml file will
> always be of the form "1.6.x", where x is at least one digit long, starting with "1". Each
> subsequent release will append the next digit of phi to x. The number of releases can be
> calculated by counting the number of digits in x and adding 2 to that.
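
Continuing the sketch above, the Cargo-compatible form just relocates the digits (again, names
invented for illustration):

``` rust
// Digits of phi after the leading "1.6".
const PHI_AFTER_1_6: &str = "180339887498948482";

/// Cargo.toml version for the nth release; valid from the third release on.
fn goldver_cargo(release: usize) -> String {
    assert!(release >= 3, "the 1.6.x form starts at the third release");
    format!("1.6.{}", &PHI_AFTER_1_6[..release - 2])
}

fn main() {
    assert_eq!(goldver_cargo(3), "1.6.1");
    assert_eq!(goldver_cargo(4), "1.6.18");
    // and the release count is the number of digits in x, plus 2
}
```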

I sincerely believe that this is *better than [semver](https://semver.org/)* for plenty of
non-library software. It was Windows 95 and then Windows 2000; obviously there was a lot of
change. I don't care about arguing over whether this is a "patch release" or a "minor release" or a
"major change". There are no downstream dependents who need to make sure they don't accidentally
upgrade to the latest release. If someone wants to update it, they know what they're getting into,
and they do it in an inherently manual way.

## chaos license

Anything that I can[^chaos-software], I license under the Chaos License, which states,

> This software is released under the terms of the Chaos License. In cases where the terms of the
> license are unclear, refer to the [Fuck Around and Find Out
> License](https://git.sr.ht/~boringcactus/fafol/tree/master/LICENSE-v0.2.md).

This is about as
[business-hostile](https://blog.joeardent.net/2017/01/say-no-to-corporate-friendly-licenses/) as I
can imagine, far worse even than the strong copyleft licenses that terrified the lawyers at ILM when
I was there. It oozes uncertainty and risk; you'd have to be deranged to seriously engage with
it. But if you're just a person? Dive right in, it doesn't really matter!
---

That's about all I have for now, my droogs. Go take what you know and do something weird with it; it
may amuse you! You might learn something! You might make someone laugh!

---

[katabastird_normal]: ./katabastird_normal.png "counting down with one hour, twenty-nine minutes, and forty-three seconds remaining"

[katabastird_ending]: ./katabastird_almost_done.png "counting down with one second remaining"

[katabastird_predator]: ./katabastird_predator.png "get to the choppah"

[randical_downloads]: ./randical_installs.png "who the hell are these people?"

[freedomdates-discord]: ./birth_of_freedomdates.png "a screencap of a conversation where I suggest 'freedom-formatted' dates are 'seconds since july 4 1776'"

[freedoms-birthday]: ./freedoms_birthday.png "Freedom was born at noon on the Fourth of July, '76, Eastern Time. This is History."

[^just-do-lots-of-work]: I did more test-writing and documenting for that useless joke project
than for most other software I ever write.

[^rc-link]: See also the link at the bottom of the page here.

[^smegver]: "semantic versioning" sounds like it could be a good idea: "the versions should be
meaningful, and when they change, they should be changed in a way that means something
consistent". As usual with these things, it's turned into a prescriptivist cult whose adherents
insist that all software be released according to its terms. This is annoying.

[^chaos-software]: This is basically anything written by me, for me, as opposed to contributions to
someone else's project.
BIN
content/sundries/shit-code/katabastird_almost_done.png
Normal file
After Width: | Height: | Size: 62 KiB |
BIN
content/sundries/shit-code/katabastird_normal.png
Normal file
After Width: | Height: | Size: 76 KiB |
BIN
content/sundries/shit-code/katabastird_predator.png
Normal file
After Width: | Height: | Size: 54 KiB |
BIN
content/sundries/shit-code/randical_installs.png
Normal file
After Width: | Height: | Size: 117 KiB |
@@ -1,6 +1,6 @@
 .cards {
   display: grid;
-  grid-template-columns: repeat(auto-fill, minmax(300px, 1fr));
+  grid-template-columns: repeat(auto-fill, minmax(500px, 1fr));
   grid-template-rows: auto;
   gap: 24px;
   padding: 12px 0;
@@ -1,3 +1,8 @@
+.hias-footer {
+  text-align: center;
+  font-size: 0.4rem;
+}
+
 .page-header {
   font-size: 3em;
   line-height: 100%;
@@ -28,15 +33,33 @@ header .main {
   display: flex;
   flex-direction: row;
   flex-wrap: wrap;
-  justify-content: space-between;
+  justify-content: flex-end;
   align-items: flex-start;
   gap: 12px;
-  font-size: 1.5rem;
+  font-size: 1.2rem;
 
   /* Otherwise header and menu is too close on small screens*/
   margin-bottom: 10px;
 }
 
+header .config-title {
+  line-height: 1.3;
+}
+
+.nav-header {
+  /* flex-child */
+  flex-grow: 0;
+  /* flex-container */
+  display: flex;
+  flex-direction: row;
+  flex-wrap: wrap;
+  justify-content: flex-end;
+  align-items: flex-even;
+  gap: 4px;
+}
+
 .socials {
   /* flex-child */
   flex-grow: 0;
@@ -67,6 +90,12 @@ header .main {
   letter-spacing: -0.5px;
 }
 
+h1 { font-size: 1.5rem }
+h2 { font-size: 1.2rem }
+
+/*
 h1,
 h2,
 h3,
@@ -76,35 +105,30 @@ h6 {
   font-size: 1.2rem;
   margin-top: 2em;
 }
+*/
 
 h1::before {
   color: var(--primary-color);
-  content: "# ";
 }
 
 h2::before {
   color: var(--primary-color);
-  content: "## ";
 }
 
 h3::before {
   color: var(--primary-color);
-  content: "### ";
 }
 
 h4::before {
   color: var(--primary-color);
-  content: "#### ";
 }
 
 h5::before {
   color: var(--primary-color);
-  content: "##### ";
 }
 
 h6::before {
   color: var(--primary-color);
-  content: "###### ";
 }
 
 @media (prefers-color-scheme: dark) {
@@ -1,6 +1,10 @@
 img {
   border: 3px solid #ececec;
   max-width: 100%;
+  display: block;
+  margin-left: auto;
+  margin-right: auto;
+  margin-bottom: 3px;
 }
 
 figure {

@@ -31,5 +35,5 @@ figure h4::before {
 }
 
 svg {
-  max-height: 15px;
+  max-height: 30px;
 }
@@ -1,5 +1,6 @@
 .tags li::before {
   content: "🏷 ";
+  font-size: 0.5rem;
 }
 
 .tags a {
@@ -11,6 +11,12 @@
   background-color: var(--primary-color);
 }
 
+.caption {
+  font-size: 0.5em;
+  font-style: italic;
+  text-align: center;
+}
+
 ::-moz-selection {
   background: var(--primary-color);
   color: var(--hover-color);
BIN
static/images/programmers_creed.jpg
Normal file
After Width: | Height: | Size: 140 KiB |
@@ -1,270 +0,0 @@
// GoatCounter: https://www.goatcounter.com
// This file (and *only* this file) is released under the ISC license:
// https://opensource.org/licenses/ISC
;(function() {
    'use strict';

    if (window.goatcounter && window.goatcounter.vars)  // Compatibility with very old version; do not use.
        window.goatcounter = window.goatcounter.vars
    else
        window.goatcounter = window.goatcounter || {}

    // Load settings from data-goatcounter-settings.
    var s = document.querySelector('script[data-goatcounter]')
    if (s && s.dataset.goatcounterSettings) {
        try { var set = JSON.parse(s.dataset.goatcounterSettings) }
        catch (err) { console.error('invalid JSON in data-goatcounter-settings: ' + err) }
        for (var k in set)
            if (['no_onload', 'no_events', 'allow_local', 'allow_frame', 'path', 'title', 'referrer', 'event'].indexOf(k) > -1)
                window.goatcounter[k] = set[k]
    }

    var enc = encodeURIComponent

    // Get all data we're going to send off to the counter endpoint.
    var get_data = function(vars) {
        var data = {
            p: (vars.path === undefined ? goatcounter.path : vars.path),
            r: (vars.referrer === undefined ? goatcounter.referrer : vars.referrer),
            t: (vars.title === undefined ? goatcounter.title : vars.title),
            e: !!(vars.event || goatcounter.event),
            s: [window.screen.width, window.screen.height, (window.devicePixelRatio || 1)],
            b: is_bot(),
            q: location.search,
        }

        var rcb, pcb, tcb  // Save callbacks to apply later.
        if (typeof(data.r) === 'function') rcb = data.r
        if (typeof(data.t) === 'function') tcb = data.t
        if (typeof(data.p) === 'function') pcb = data.p

        if (is_empty(data.r)) data.r = document.referrer
        if (is_empty(data.t)) data.t = document.title
        if (is_empty(data.p)) data.p = get_path()

        if (rcb) data.r = rcb(data.r)
        if (tcb) data.t = tcb(data.t)
        if (pcb) data.p = pcb(data.p)
        return data
    }

    // Check if a value is "empty" for the purpose of get_data().
    var is_empty = function(v) { return v === null || v === undefined || typeof(v) === 'function' }

    // See if this looks like a bot; there is some additional filtering on the
    // backend, but these properties can't be fetched from there.
    var is_bot = function() {
        // Headless browsers are probably a bot.
        var w = window, d = document
        if (w.callPhantom || w._phantom || w.phantom)
            return 150
        if (w.__nightmare)
            return 151
        if (d.__selenium_unwrapped || d.__webdriver_evaluate || d.__driver_evaluate)
            return 152
        if (navigator.webdriver)
            return 153
        return 0
    }

    // Object to urlencoded string, starting with a ?.
    var urlencode = function(obj) {
        var p = []
        for (var k in obj)
            if (obj[k] !== '' && obj[k] !== null && obj[k] !== undefined && obj[k] !== false)
                p.push(enc(k) + '=' + enc(obj[k]))
        return '?' + p.join('&')
    }

    // Show a warning in the console.
    var warn = function(msg) {
        if (console && 'warn' in console)
            console.warn('goatcounter: ' + msg)
    }

    // Get the endpoint to send requests to.
    var get_endpoint = function() {
        var s = document.querySelector('script[data-goatcounter]')
        if (s && s.dataset.goatcounter)
            return s.dataset.goatcounter
        return (goatcounter.endpoint || window.counter)  // counter is for compat; don't use.
    }

    // Get current path.
    var get_path = function() {
        var loc = location,
            c = document.querySelector('link[rel="canonical"][href]')
        if (c) {  // May be relative or point to different domain.
            var a = document.createElement('a')
            a.href = c.href
            if (a.hostname.replace(/^www\./, '') === location.hostname.replace(/^www\./, ''))
                loc = a
        }
        return (loc.pathname + loc.search) || '/'
    }

    // Run function after DOM is loaded.
    var on_load = function(f) {
        if (document.body === null)
            document.addEventListener('DOMContentLoaded', function() { f() }, false)
        else
            f()
    }

    // Filter some requests that we (probably) don't want to count.
    goatcounter.filter = function() {
        if ('visibilityState' in document && document.visibilityState === 'prerender')
            return 'visibilityState'
        if (!goatcounter.allow_frame && location !== parent.location)
            return 'frame'
        if (!goatcounter.allow_local && location.hostname.match(/(localhost$|^127\.|^10\.|^172\.(1[6-9]|2[0-9]|3[0-1])\.|^192\.168\.|^0\.0\.0\.0$)/))
            return 'localhost'
        if (!goatcounter.allow_local && location.protocol === 'file:')
            return 'localfile'
        if (localStorage && localStorage.getItem('skipgc') === 't')
            return 'disabled with #toggle-goatcounter'
        return false
    }

    // Get URL to send to GoatCounter.
    window.goatcounter.url = function(vars) {
        var data = get_data(vars || {})
        if (data.p === null)  // null from user callback.
            return
        data.rnd = Math.random().toString(36).substr(2, 5)  // Browsers don't always listen to Cache-Control.

        var endpoint = get_endpoint()
        if (!endpoint)
            return warn('no endpoint found')

        return endpoint + urlencode(data)
    }

    // Count a hit.
    window.goatcounter.count = function(vars) {
        var f = goatcounter.filter()
        if (f)
            return warn('not counting because of: ' + f)

        var url = goatcounter.url(vars)
        if (!url)
            return warn('not counting because path callback returned null')

        var img = document.createElement('img')
        img.src = url
        img.style.position = 'absolute'  // Affect layout less.
        img.style.bottom = '0px'
        img.style.width = '1px'
        img.style.height = '1px'
        img.loading = 'eager'
        img.setAttribute('alt', '')
        img.setAttribute('aria-hidden', 'true')

        var rm = function() { if (img && img.parentNode) img.parentNode.removeChild(img) }
        img.addEventListener('load', rm, false)
        document.body.appendChild(img)
    }

    // Get a query parameter.
    window.goatcounter.get_query = function(name) {
        var s = location.search.substr(1).split('&')
        for (var i = 0; i < s.length; i++)
            if (s[i].toLowerCase().indexOf(name.toLowerCase() + '=') === 0)
                return s[i].substr(name.length + 1)
    }

    // Track click events.
    window.goatcounter.bind_events = function() {
        if (!document.querySelectorAll)  // Just in case someone uses an ancient browser.
            return

        var send = function(elem) {
            return function() {
                goatcounter.count({
                    event: true,
                    path: (elem.dataset.goatcounterClick || elem.name || elem.id || ''),
                    title: (elem.dataset.goatcounterTitle || elem.title || (elem.innerHTML || '').substr(0, 200) || ''),
                    referrer: (elem.dataset.goatcounterReferrer || elem.dataset.goatcounterReferral || ''),
                })
            }
        }

        Array.prototype.slice.call(document.querySelectorAll("*[data-goatcounter-click]")).forEach(function(elem) {
            if (elem.dataset.goatcounterBound)
                return
            var f = send(elem)
            elem.addEventListener('click', f, false)
            elem.addEventListener('auxclick', f, false)  // Middle click.
            elem.dataset.goatcounterBound = 'true'
        })
    }

    // Add a "visitor counter" frame or image.
    window.goatcounter.visit_count = function(opt) {
        on_load(function() {
            opt        = opt        || {}
            opt.type   = opt.type   || 'html'
            opt.append = opt.append || 'body'
            opt.path   = opt.path   || get_path()
            opt.attr   = opt.attr   || {width: '200', height: (opt.no_branding ? '60' : '80')}

            opt.attr['src'] = get_endpoint() + 'er/' + enc(opt.path) + '.' + enc(opt.type) + '?'
            if (opt.no_branding) opt.attr['src'] += '&no_branding=1'
            if (opt.style)       opt.attr['src'] += '&style=' + enc(opt.style)
            if (opt.start)       opt.attr['src'] += '&start=' + enc(opt.start)
            if (opt.end)         opt.attr['src'] += '&end='   + enc(opt.end)

            var tag = {png: 'img', svg: 'img', html: 'iframe'}[opt.type]
            if (!tag)
                return warn('visit_count: unknown type: ' + opt.type)

            if (opt.type === 'html') {
                opt.attr['frameborder'] = '0'
                opt.attr['scrolling']   = 'no'
            }

            var d = document.createElement(tag)
            for (var k in opt.attr)
                d.setAttribute(k, opt.attr[k])

            var p = document.querySelector(opt.append)
            if (!p)
                return warn('visit_count: append not found: ' + opt.append)
            p.appendChild(d)
        })
    }

    // Make it easy to skip your own views.
    if (location.hash === '#toggle-goatcounter') {
        if (localStorage.getItem('skipgc') === 't') {
            localStorage.removeItem('skipgc', 't')
            alert('GoatCounter tracking is now ENABLED in this browser.')
        }
        else {
            localStorage.setItem('skipgc', 't')
            alert('GoatCounter tracking is now DISABLED in this browser until ' + location + ' is loaded again.')
        }
    }

    if (!goatcounter.no_onload)
        on_load(function() {
            // 1. Page is visible, count request.
            // 2. Page is not yet visible; wait until it switches to 'visible' and count.
            // See #487
            if (!('visibilityState' in document) || document.visibilityState === 'visible')
                goatcounter.count()
            else {
                var f = function(e) {
                    if (document.visibilityState !== 'visible')
                        return
                    document.removeEventListener('visibilitychange', f)
                    goatcounter.count()
                }
                document.addEventListener('visibilitychange', f)
            }

            if (!goatcounter.no_events)
                goatcounter.bind_events()
        })
})();
27
static/js/footnoter.js
Normal file
@@ -0,0 +1,27 @@
// The DOMContentLoaded event fires when the initial HTML
// document has been completely loaded and parsed, without
// waiting for stylesheets, images, and subframes to finish loading.
document.addEventListener('DOMContentLoaded', (_event) => {
    const references = document.getElementsByClassName('footnote-reference')
    // For each footnote reference, set an id so we can refer to it from the definition.
    // If the definition had an id of 'some_id', then the reference has id `some_id_ref`.
    for (const reference of references) {
        const link = reference.firstChild
        const id = link.getAttribute('href').slice(1) // skip the '#'
        link.setAttribute('id', `${id}_ref`)
    }

    const footnotes = document.getElementsByClassName('footnote-definition-label')
    // For each footnote-definition, add an anchor element with an href to its corresponding reference.
    // The text used for the added anchor is 'Leftwards Arrow with Hook' (U+21A9).
    for (const footnote of footnotes) {
        const pid = footnote.parentElement.getAttribute('id')
        const num = footnote.textContent;
        footnote.textContent = '';

        const backReference = document.createElement('a')
        backReference.setAttribute('href', `#${pid}_ref`)
        backReference.textContent = `${num} ⬑`
        footnote.append(backReference)
    }
});
@@ -13,7 +13,7 @@
     Nothing here?!
     {% endblock main_content %}
   </div>
-  <a rel="me" href="https://socialnotwork.net/@nebkor">_</a>
+  <a rel="me" href="https://socialnotwork.net/@nebkor"></a>
   </body>
 
 </html>
16
templates/home.html
Normal file
@@ -0,0 +1,16 @@
{% extends "base.html" %}

{% block main_content %}
{% if section.extra.section_path -%}
{% set section = get_section(path=section.extra.section_path) %}
{% endif -%}

{{ post_macros::page_header(title=section.title) }}

{%- set tags = get_taxonomy(kind="tags") -%}
{%- set term = get_taxonomy_term(kind="tags", term="proclamation") -%}

{{ post_macros::list_posts(pages = term.pages) }}

{% endblock main_content %}
15
templates/page.html
Normal file
@@ -0,0 +1,15 @@
{% extends "base.html" %}

{% block main_content %}
{{ post_macros::content(page=page) }}
<hr>
{{ post_macros::tags(page=page, short=true) }}
<hr>
<script src="/js/footnoter.js"></script>
<div class=hias-footer>
  <p>
    <script async defer src="https://www.recurse-scout.com/loader.js?t=e38ac183ce767b3800a4b4587c00f3fd"></script>
    <div class="rc-scout"></div>
  </p>
</div>
{% endblock main_content %}
@@ -1,25 +1,29 @@
 <header>
-  <div class="main">
-    <a href={{ config.base_url }}>{{ config.title }}</a>
+  <div class="config-title">
+    <h1><a href={{ config.base_url }}>{{ config.title }}</a></h1>
+  </div>
+
+  <div class="main">
     <div class="socials">
       {% for social in config.extra.socials %}
       <a href="{{ social.url }}" class="social">
         <img alt={{ social.name }} src="/social_icons/{{ social.icon }}.svg" rel="me">
+      </a>
       {% if not loop.last %}
 
       {% endif %}
-      </a>
       {% endfor %}
     </div>
-  </div>
 
-  <nav>
+  </div>
+  <div class="nav-header">
     {% for menu in config.extra.menu %}
-    <a href={{ menu.url }} style="margin-left: 0.7em">{{ menu.name }}</a>
+    <a href={{ menu.url }}>{{ menu.name }}</a>
     {% if not loop.last %}
 
     {% endif %}
     {% endfor %}
-  </nav>
+  </div>
 </header>
@@ -7,7 +7,7 @@
 <ul>
   {% for tag in terms %}
   <h2>
-    <a class="post-tag" href="{{ get_taxonomy_url(kind='tags', name=tag.name) | safe }}">#{{ tag.name }}</a> <sup>{{ tag.pages | length }}</sup>
+    <a class="post-tag" href="{{ get_taxonomy_url(kind='tags', name=tag.name) | safe }}">{{ tag.name }}</a> <sup>{{ tag.pages | length }}</sup>
   </h2>
   {% endfor %}
 </ul>
@@ -2,7 +2,9 @@
 
 {% block main_content %}
 
-{{ post_macros::list_posts(pages = term.pages) }}
+<h2>Posts tagged {{ term.name }}</h2>
+
+{{ post_macros::cards_posts(pages = term.pages) }}
 
 {% endblock main_content %}
 
@@ -1 +1 @@
-Subproject commit 989bdd129b8bcb474a0336967e7c399433a23f64
+Subproject commit eb02b7d3c18a397fe5baa394b50fe2c199208dbe