`Cont` is, almost exactly, Peirce's law. I remembered seeing a tweet from Phil Freeman which proves that they are indeed equivalent. I thought it would be a fun exercise to prove other equivalences from classical logic.
This post assumes you are familiar with:

- the Curry-Howard correspondence,
- classical and intuitionistic logic (for example, see it explained using Coq in Software Foundations), and
- one of Haskell, Agda, Idris or Coq.
Haskell and PureScript define `MonadCont`, which represents monads that support the call-with-current-continuation (`callCC`) operation:

```haskell
class Monad m => MonadCont m where
  callCC :: ((a -> m b) -> m a) -> m a
```
`callCC` generally calls the function it receives, passing it the current continuation (the `a -> m b`). This acts like an abort method, or an early exit.
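As a quick illustration (a minimal sketch using `Cont` and `callCC` from the transformers package; `safeDiv` is a hypothetical example name), early exit looks like this:

```haskell
import Control.Monad (when)
import Control.Monad.Trans.Cont (callCC, evalCont)

-- Early exit: if the divisor is zero, abort the block and return 0.
safeDiv :: Int -> Int -> Int
safeDiv x y = evalCont $ callCC $ \exit -> do
  when (y == 0) $ exit 0  -- invokes the captured continuation, skipping the rest
  pure (x `div` y)
```

`safeDiv 10 2` evaluates to `5`, while `safeDiv 10 0` short-circuits to `0`.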
The interesting part is that this monad looks very similar to Peirce's law:
$ ((P \to Q) \to P) \to P $
If we replace `P` with `a` (or `m a`) and `Q` with `m b`, we get the exact same thing. Since we are dealing with monads, we need to use Kleisli arrows, so all implications from logic must be lifted as such (so `P -> Q` becomes `a -> m b`).
In order to keep things clean, I decided to wrap each equivalent law in its own newtype and write an instance of `Iso` (which translates to iff) between each of the laws and the law of excluded middle.
```haskell
{-# LANGUAGE InstanceSigs #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE ScopedTypeVariables #-}

module Logic where

import Control.Applicative (liftA2)
import Control.Monad ((<=<))
import Data.Void (Void, absurd)

class Iso a b where
  to :: a -> b
  from :: b -> a
```
This is just a neat way of having to prove both implications in an iff, packed as `to` and `from`. Moving on, we can declare the following types.
Starting with the formula from logic, we can easily write out the Haskell type by just keeping in mind we have to transform all implications to Kleisli arrows:
$ \forall P, Q. ((P \to Q) \to P) \to P $
```haskell
newtype Peirce m =
  Peirce
    ( forall a b
    . ((a -> m b) -> m a)
    -> m a
    )
```
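Note that, assuming the `MonadCont` class from `mtl` is in scope, any such monad inhabits this newtype; the witness is literally `callCC`:

```haskell
-- A Peirce witness for any MonadCont: callCC has exactly the right type.
peirceOfCont :: MonadCont m => Peirce m
peirceOfCont = Peirce callCC
```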
The key part to remember here is that negation in classical logic translates to `-> Void` in intuitionistic logic (and `-> m Void` in our case, since we are using Kleisli arrows):
$ \forall P. P \lor \neg P $
```haskell
newtype Lem m =
  Lem
    ( forall a
    . m (Either a (a -> m Void))
    )
```
Nothing new here, just rewriting negation as `-> m Void`:
$ \forall P. \neg \neg P \to P $
```haskell
newtype DoubleNegation m =
  DoubleNegation
    ( forall a
    . ((a -> m Void) -> m Void)
    -> m a
    )
```
The only new thing here is that we translate *and* to tuples, and *or* to `Either`:
$ \forall P, Q. \neg (\neg P \land \neg Q) \to P \lor Q $
```haskell
newtype DeMorgan m =
  DeMorgan
    ( forall a b
    . ((a -> m Void, b -> m Void) -> m Void)
    -> m (Either a b)
    )
```
$ \forall P, Q. (P \to Q) \to Q \lor \neg P $
```haskell
newtype ImpliesToOr m =
  ImpliesToOr
    ( forall a b
    . (a -> m b)
    -> m (Either b (a -> m Void))
    )
```
If this is interesting to you, this would be a good place to look away and try for yourself. If you do, keep in mind that typed holes are a very useful tool in this process (see this for an example).
```haskell
instance Monad m => Iso (Lem m) (Peirce m) where
  to :: Lem m -> Peirce m
  to (Lem lem) = Peirce proof
    where
      proof :: ((a -> m b) -> m a) -> m a
      proof abort = lem >>= either pure (go abort)

      go :: ((a -> m b) -> m a) -> (a -> m Void) -> m a
      go abort not_a = abort $ fmap absurd . not_a

  from :: Peirce m -> Lem m
  from (Peirce p) = Lem $ p go
    where
      go :: (Either a (a -> m Void) -> m Void) -> m (Either a (a -> m Void))
      go not_lem = pure . Right $ not_lem . Left
```
```haskell
instance Monad m => Iso (Lem m) (DoubleNegation m) where
  to :: Lem m -> DoubleNegation m
  to (Lem lem) = DoubleNegation proof
    where
      proof :: ((a -> m Void) -> m Void) -> m a
      proof notNot = lem >>= either pure (go notNot)

      go :: ((a -> m Void) -> m Void) -> (a -> m Void) -> m a
      go notNot notA = fmap absurd $ notNot notA

  from :: DoubleNegation m -> Lem m
  from (DoubleNegation dne) = Lem $ dne not_exists_dist
    where
      -- determined by its type: feed the assumed refutation of LEM
      -- its own right branch
      not_exists_dist :: (Either a (a -> m Void) -> m Void) -> m Void
      not_exists_dist not_lem = not_lem . Right $ not_lem . Left
```
```haskell
instance Monad m => Iso (Lem m) (DeMorgan m) where
  to :: Lem m -> DeMorgan m
  to (Lem lem) = DeMorgan proof
    where
      proof :: ((a -> m Void, b -> m Void) -> m Void) -> m (Either a b)
      proof notNotANotB = lem >>= either pure (go notNotANotB)

      go :: ((a -> m Void, b -> m Void) -> m Void)
         -> (Either a b -> m Void)
         -> m (Either a b)
      go notNotANotB =
        fmap absurd
          . notNotANotB
          . liftA2 (,) (. Left) (. Right)

  from :: DeMorgan m -> Lem m
  from (DeMorgan dm) = Lem $ dm go
    where
      go :: (a -> m Void, (a -> m Void) -> m Void) -> m Void
      go (notA, notNotA) = notNotA notA
```
```haskell
instance Monad m => Iso (Lem m) (ImpliesToOr m) where
  to :: Lem m -> ImpliesToOr m
  to (Lem lem) = ImpliesToOr proof
    where
      proof :: (a -> m b) -> m (Either b (a -> m Void))
      proof fab = either Left (go fab) <$> lem

      go :: (a -> m b) -> (b -> m Void) -> Either b (a -> m Void)
      go fab notB = Right $ notB <=< fab

  from :: ImpliesToOr m -> Lem m
  from (ImpliesToOr im) = Lem $ im pure
```
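As a sanity check (a sketch assuming `Cont` and `callCC` from `mtl`'s `Control.Monad.Cont`), the continuation monad inhabits `Peirce`, so the isomorphism above hands us excluded middle for free:

```haskell
import Control.Monad.Cont (Cont, callCC)

-- Excluded middle holds in the continuation monad, via the Peirce -> Lem
-- direction of the Iso instance.
lemCont :: Lem (Cont r)
lemCont = from (Peirce callCC)
```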
The full source code is available on my github.
This post will show how a simple proof works in Logic, Type Theory, and Category Theory: given `A ∧ (B ∧ C)`, prove `(A ∧ B) ∧ C`.
In logic, there are several systems that allows us to reason about propositions. One of them is the natural deduction system and is defined using introduction and elimination rules. For each connective, or operator, we will have at least one of each introduction and elimination rules.
For example, conjunction (`∧`) has one introduction rule:

```
A   B
----- (∧i)
A ∧ B
```
which means, if we know `A` and `B`, then we can use the introduction rule (`∧i`) to deduce the proposition `A ∧ B`.
There are two elimination rules for `∧`:

```
A ∧ B         A ∧ B
----- (∧e1)   ----- (∧e2)
  A             B
```
which means, if we know `A ∧ B`, we can obtain `A` or `B` if we use the elimination rules `∧e1` or `∧e2`.
So, if we wanted to prove the conclusion `(A ∧ B) ∧ C` from the hypothesis `A ∧ (B ∧ C)`, we would have to:

1. obtain an `A` by using `∧e1` on the hypothesis
2. obtain a `B ∧ C` by using `∧e2` on the hypothesis
3. obtain a `B` by using `∧e1` on (2)
4. obtain a `C` by using `∧e2` on (2)
5. obtain an `A ∧ B` by using `∧i` on (1) and (3)
6. reach the conclusion `(A ∧ B) ∧ C` by using `∧i` on (5) and (4)
In natural deduction, it looks like this:

```
A ∧ (B ∧ C)        A ∧ (B ∧ C)        A ∧ (B ∧ C)
----------- (∧e1)  ----------- (∧e2)  ----------- (∧e2)
     A                B ∧ C              B ∧ C
                      ----- (∧e1)        ----- (∧e2)
                        B                  C
------------------------- (∧i)
          A ∧ B                            C
------------------------------------------- (∧i)
                (A ∧ B) ∧ C
```
The Curry-Howard correspondence tells us that conjunction translates to pairs in type theory, so we'll switch notation to Haskell's tuple type, using the following notation:

- Types: capital letters `A`, `B`, `C`, `D`
- Terms: lowercase letters `a`, `b`, `c`, `d`
- Tuple types: `(A, B)` for the tuple of `A` and `B`
- Tuple terms: `(a, b)` for the tuple of `a` and `b`, of type `(A, B)`
Typed lambda calculus has a deduction system as well. Tuple introduction looks very similar to `∧i`:

```
a : A   b : B
--------------- ((,)i)
(a, b) : (A, B)
```
which means, given a term `a` of type `A` and a term `b` of type `B`, then we can obtain a term `(a, b)` of type `(A, B)`. Note that we no longer need to say "given we know `A` and `B`", since the existence of a term of each type is enough to form the tuple.
Similarly, there are two elimination rules:

```
(a, b) : (A, B)           (a, b) : (A, B)
--------------- ((,)e1)   --------------- ((,)e2)
     a : A                     b : B
```
which means, given a tuple `(a, b)` of type `(A, B)`, we can obtain a term `a` or `b` of type `A` or `B`.
If we translate the proposition above, then we have to prove `((A, B), C)` from `(A, (B, C))`.

```
(a, (b, c)) : (A, (B, C))           (a, (b, c)) : (A, (B, C))         (a, (b, c)) : (A, (B, C))
------------------------- ((,)e1)   ------------------------- ((,)e2) ------------------------- ((,)e2)
         a : A                          (b, c) : (B, C)                   (b, c) : (B, C)
                                        --------------- ((,)e1)           --------------- ((,)e2)
                                            b : B                             c : C
----------------------------------------------- ((,)i)
              (a, b) : (A, B)                                                 c : C
----------------------------------------------------------------------------------- ((,)i)
                          ((a, b), c) : ((A, B), C)
```
The form is identical to the logic proof, except we have terms and the rules use `(,)` instead of `∧`.

We can write the same thing in Haskell:

```haskell
assoc :: (a, (b, c)) -> ((a, b), c)
assoc (a, (b, c)) = ((a, b), c)
```
However, this takes advantage of a powerful Haskell feature known as pattern matching.
Given the proof above, it's easy to notice that `(,)i` is exactly the tuple constructor, `(,)e1` is `fst` and `(,)e2` is `snd`. Knowing this, and looking at the proof above, we could say, given hypothesis `h = (a, (b, c)) : (A, (B, C))`, we can obtain:

- `a : A` from `fst h`
- `(b, c) : (B, C)` from `snd h`
- `b : B` from `fst (snd h)`
- `c : C` from `snd (snd h)`
- `(a, b) : (A, B)` from `(fst h, fst (snd h))`
- `((a, b), c) : ((A, B), C)` from `((fst h, fst (snd h)), snd (snd h))`
So, in Haskell:

```haskell
assoc' :: (a, (b, c)) -> ((a, b), c)
assoc' h = ((fst h, fst (snd h)), snd (snd h))
```
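A quick check that the two proofs compute the same program:

```haskell
-- Both definitions produce the same re-associated tuple.
agree :: Bool
agree = assoc (1 :: Int, ('a', True)) == assoc' (1 :: Int, ('a', True))
-- True; both sides evaluate to ((1, 'a'), True)
```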
This is a neat effect of the Curry-Howard correspondence: proofs are programs. So, once we write the proof, we also have the program. We could even write the program and then extract the proof: it's really the same thing.

The Curry-Howard-Lambek correspondence extends the correspondence to include category theory (CT) as well. It connects propositions to objects, implications to arrows, conjunction to categorical products, etc.
While in logic we said "given a proof of `A`", and in type theory we said "given a term of type `A`", the only way we can do the same in CT is to say "given an arrow from the terminal object `T` to `A`, `f : T → A`". This works because the terminal object represents `True` / `Unit` in logic / type theory, so it means "given we can deduce `A` from `True`", or "given we can obtain a term `a : A` from `() : ()`".
Armed with this, we can now express the same problem in CT terms:

- given an arrow `h : T → (A × (B × C))`
- obtain an arrow `p : T → ((A × B) × C)`
Before we begin, let's review what a product is:

- given `A × B`, we know there are two arrows `p : A × B → A` and `q : A × B → B`, which we will write as `<p, q>`
- given `A × B` is the product of `A` and `B`, and `C` is an object with two arrows `p' : C → A` and `q' : C → B`, there exists a unique arrow `m : C → A × B` such that `p ∘ m = p'` and `q ∘ m = q'`

Also, remember that we can compose any two arrows `f : A → B` and `g : B → C` via `g ∘ f`.
Now we are ready for the proof:

1. `T` is the terminal object, and `t : T → A × (B × C)` is what we start with. We need to be able to obtain an arrow `t' : T → (A × B) × C`.
2. By product `A × (B × C)`, we know there exist `p : A × (B × C) → A` and `q : A × (B × C) → B × C`.
3. By composition, we can obtain the arrows `p ∘ t : T → A` and `q ∘ t : T → B × C`.
4. By product `B × C`, we know there exist `p' : B × C → B` and `q' : B × C → C`.
5. By composition, we can obtain the arrow `p' ∘ q ∘ t : T → B`.
6. So now, we have the following arrows: `p ∘ t : T → A` and `p' ∘ q ∘ t : T → B`.
7. By definition of product, since we know `A × B` is the product of `A` and `B`, and since we have the arrows `T → A` and `T → B`, then we know there must be a unique arrow which we'll name `l : T → A × B`.
8. By composition we can obtain the arrow `q' ∘ q ∘ t : T → C`.
9. Similarly to the step before, by definition of product, since we know `(A × B) × C` is a product of `A × B` and `C`, and since we have the arrows `l : T → A × B` and `q' ∘ q ∘ t : T → C`, then there must exist a unique arrow `t' : T → (A × B) × C`.

Note: there are, in fact, as many arrows `T → (A × B) × C` as there are elements in `(A × B) × C`, but `t'` is the unique one derived from the initial arrow, `t`.
Edit: See this twitter thread for a whiteboard proof of sum associativity.
If we follow the CT arrows as we followed the logic proof:

- we could rewrite the `l : T → A × B` arrow as `<i,j> : T → A × B`, where `i = p ∘ t : T → A` and `j = p' ∘ q ∘ t : T → B`
- we already have `k = q' ∘ q ∘ t : T → C`

So, if instead of `t` we write `a_bc` to denote our hypothesis, or inputs, let's look closer at what `i`, `j` and `k` are:

- `i` is `p ∘ t`, which is the left projection of the premise, or `fst a_bc`. You may ask: why? Well, `p ∘ t` means `p` after `t`. In our case, `t` represents the input, so it's equivalent to `a_bc`, and `p` is the left projection, which is equivalent to `fst`. Keep in mind that `a ∘ b ∘ c` means "`c` first, then `b`, then `a`" when reading the following.
- `j` is `p' ∘ q ∘ t`, which is `fst (snd a_bc)`
- `l = <i,j>`, so `l = (fst a_bc, fst (snd a_bc))`
- `k` is `snd (snd a_bc)`
- `T → (A × B) × C` is `<<i,j>, k> = ((fst a_bc, fst (snd a_bc)), snd (snd a_bc))`
If we look back at the Haskell definition:

```haskell
assoc a_bc = ((fst a_bc, fst (snd a_bc)), snd (snd a_bc))
```

Which means we reached the same implementation/proof, again.
Edit: Thank you to Bartosz Milewski and GhiOm for their early feedback.
This post will go a bit further than that and show the type theoretic equivalents of existential and universal quantifiers. I'll then explore some interesting properties of these types. This post will not go into the category theory part of this, although I may do that in a future post.
Forall (∀) is the universal quantifier and is generally written as
∀ x. P x
where x
is a variable and P
is a predicate
taking such a variable. A basic example of such a proposition could be:
"For all numbers x, if you add one to x, you get a greater number
than x", or:
∀ x. x + 1 > x
Similarly, exists (∃) is the existential quantifier and is written as
∃ x. P x
where x
is a variable and P
is a predicate,
for example: "there exists a number that is greater than 10",
or:
∃ x. x > 10
Please note that in classical logic, you can prove an existential
proposition by either finding an x
for which
P(x)
is true, or by assuming there does not exist
such an x
and reaching a contradiction (proof by
contradiction). In intuitionistic logic, the latter is not
possible: we have to find the x
. One could then say that an
existential quantifier in intuitionistic logic is described by a pair of
x
and P(x)
.
In the next chapter, we will look at dependent sums and I will say they are the Curry-Howard correspondent of existential quantifiers. Most theorem provers that rely on this correspondence make use of proof irrelevance, which essentially means that it should not matter whether one picks `11` or `12` in order to prove `∃ x. x > 10`: the proofs should be equivalent. We will not look into this, nor will we make use of proof irrelevance in this post.
Dependent sums (Σ) are the type theoretic equivalent of existential quantifiers. In Agda, we can define the dependent sum type as:

```agda
data Σ {A : Set} (P : A → Set) : Set where
  Σ_intro : ∀ (a : A) → P a → Σ P
```
The Σ type is a higher-kinded type which takes a higher-kinded type, `P : A → Set`. `P` takes an `A` and gives us a new type (`Set`, in Agda). The nice part about this is that `P` holds information about both the type of the existential variable (`A`) as well as the resulting type (`P a`).
Constructing such a term requires a term of the existential type (evidence for `A`), and a term of the predicate type (evidence for `P a`). For example, the example above could be written as `Σ_intro 11 (11 > 10)`, assuming there exists a type `>` which expresses the greater-than relationship.
Please note that the above example is a simplification; going into the details of how an inductive type for `>` works is beyond the scope of this post.
Dependent products (Π) are the type theoretic equivalent of universal quantifiers. In Agda, we can define the dependent product type as:

```agda
data Π {A : Set} (P : A → Set) : Set where
  Π_intro : (∀ (a : A) → P a) → Π P
```
The Π type is also a higher-kinded type. Note that this definition is almost identical to the Σ definition, except for the parentheses used in the constructor (`Π_intro`). This lines up with the intuition that `∀ x. P(x)` can be described by a function `A → P(x)`, where `x : A`.

Constructing a Π type takes a function from the quantified variable to the type described by the predicate. Constructing a term would, for example, be `Π_intro (λn. n + 1 > n)`.
We will first need to define a `constT` function:

```agda
constT : ∀ (X : Set) (Y : Set) → Y → Set
constT x _ _ = x
```
This takes two types, `X` and `Y`. It then takes a value of type `Y`, and ignores it, returning the type `X`.

So, if we take `P` to not depend on the quantified item and define it using `constT`, then we can obtain tuples in the case of Σ types:
```agda
Σpair : ∀ (A B : Set) → Set
Σpair a b = Σ (constT b a)
```
Note that `Σpair` is a type-level function that takes two types and returns the type of pairs.

We can then define a simple pair constructor using the constructor above:

```agda
ΣmkPair : ∀ {A : Set} {B : Set} → A → B → Σpair A B
ΣmkPair a b = Σ_intro a b
```
And we can have the two projections by simple pattern match, returning the appropriate value:

```agda
Σfst : ∀ {A B : Set} → Σpair A B → A
Σfst (Σ_intro a _) = a

Σsnd : ∀ {A B : Set} → Σpair A B → B
Σsnd (Σ_intro _ b) = b
```
This works because Σ types are defined as `a → P a → Σ P`, so if we take a `P` such that `P a` always is `b`, then we get `a → b → Σ`, which is essentially a tuple of `a` and `b`.

We can now say `Σsnd (ΣmkPair 1 2)` and get the result `2`.
Similarly, if we take `P` to be `constT B A`, we can obtain functions out of Π types:

```agda
Πfunction : ∀ (A B : Set) → Set
Πfunction a b = Π (constT b a)

ΠmkFunction : ∀ {A B : Set} → (A → B) → Πfunction A B
ΠmkFunction f = Π_intro f

Πapply : ∀ {A B : Set} → Πfunction A B → A → B
Πapply (Π_intro f) a = f a
```
As with sum types, this works because Π types are defined as `(a → P a) → Π P`, so if we take `P` such that `P a` always is `b`, then we get `(a → b) → Π`, which is essentially a function from `a` to `b`.

We can now write `Πapply (ΠmkFunction (λx. x + 1)) 1` and get the result `2`.
We can obtain sum types from Σ types by using `Bool` as the variable type, and the predicate returning type `A` for `true`, and type `B` for `false`:

```agda
bool : ∀ (A B : Set) → Bool → Set
bool a _ true = a
bool _ b false = b
```

Note that `a` and `b` are types! We can now write:

```agda
Σsum : ∀ (A B : Set) → Set
Σsum a b = Σ (bool a b)
```
Now, in order to construct such a type (via left or right), we just need to pass the appropriate boolean value along with an item of the correct type:

```agda
Σsum_left : ∀ {A : Set} (B : Set) → A → Σsum A B
Σsum_left _ a = Σ_intro true a

Σsum_right : ∀ {B : Set} (A : Set) → B → Σsum A B
Σsum_right _ b = Σ_intro false b
```
Eliminating is just a matter of pattern matching on the boolean value and applying the correct function:

```agda
Σsum_elim : ∀ {A B R : Set} → (A → R) → (B → R) → Σsum A B → R
Σsum_elim f _ (Σ_intro true a) = f a
Σsum_elim _ g (Σ_intro false b) = g b
```

As an example, we can write `Σsum_elim (const "left") (const "right") (Σsum_left Bool 1)` and get the result `"left"`.
Interestingly, we can also obtain sum types from Π types: the idea is to encode the eliminator right into our type! For that we will need the following predicate:

```agda
prodPredicate : ∀ (A B R : Set) → Set
prodPredicate a b r = (a → r) → (b → r) → r
```

This means that given two types `A` and `B`, we get a type-level function from `R` to `(A → R) → (B → R) → R`, which is exactly the eliminator type. Don't worry about `Set₁` or `Π'` for now:

```agda
Πsum : ∀ (A B : Set) → Set₁
Πsum a b = Π' (prodPredicate a b)
```
This means that in order to build a sum type, we need to pass a type `R` and a function `(A → R) → (B → R) → R`. So, the constructors will look like:

```agda
Πsumleft : ∀ {A : Set} (B : Set) → A → Πsum A B
Πsumleft _ a = Π'_intro (λ _ f _ → f a)
```
The lambda is the only interesting bit: we construct a function that, given a type `R` (the first `_`) and a function `A → R` (named `f`), returns an `R` by calling `f a` (the third `_` parameter is for the function `g : B → R`, which is not required for the left constructor).

Similarly, we can write a constructor for right:

```agda
Πsumright : ∀ {A : Set} (B : Set) → B → Πsum A B
Πsumright _ b = Π'_intro (λ _ _ g → g b)
```
As for the eliminator, we simply require the two functions `A → R` and `B → R` in order to pass to our dependent product and get an `R`:

```agda
Πsumelim : ∀ {A B R : Set} → (A → R) → (B → R) → Πsum A B → R
Πsumelim f g (Π'_intro elim) = elim _ f g
```
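The same Church-encoding trick can be sketched in plain Haskell, without dependent types (the `ChurchSum` names are hypothetical, chosen for the sketch):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A sum encoded by its eliminator, mirroring prodPredicate.
newtype ChurchSum a b = ChurchSum (forall r. (a -> r) -> (b -> r) -> r)

churchLeft :: a -> ChurchSum a b
churchLeft a = ChurchSum $ \f _ -> f a

churchRight :: b -> ChurchSum a b
churchRight b = ChurchSum $ \_ g -> g b

churchElim :: (a -> r) -> (b -> r) -> ChurchSum a b -> r
churchElim f g (ChurchSum elim) = elim f g
```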
We've used three type-level functions to generate a few interesting types:

| Function      | Σ-type | Π-type   |
|---------------|--------|----------|
| constT        | tuple  | function |
| bool          | either | tuple    |
| prodPredicate |        | either   |
What other interesting typelevel functions can you find for Σ and/or Π types?
You can find the source file here.
`* -> *`), contravariant functors, invariant functors, etc.
This post will show an alternate `Functor` that can handle all of the above. I got this idea from the awesome Tom Harding, and he apparently got it from @Iceland_jack. Although this is not new, I could not find any blog post or paper covering it.
The problem is quite straightforward. Let's say we want to define a functor instance for `(a, b)` which changes the `a` to `c` using an `a -> c` function. This should be possible, but there is no way to write it using `Functor` and `fmap`.

There are two ways to do this in Haskell using `Prelude`:

- by using `Bifunctor` / `first`, or
- by using the `Flip` newtype.

While both the above options work, they are not particularly elegant. On top of that, there is no common Trifunctor package, and flipping arguments around and wrapping/unwrapping newtypes is not very appealing, which means the approach doesn't quite scale well.
There are two problems with `Functor`:

- `f` has the wrong kind if we want to allow higher-kinded functors, and
- the arrow of the mapped function is the wrong type if we want to allow contravariant or invariant functors (or even other types of mappings!).

We can fix both problems by adding additional types to the class:

```haskell
class FunctorOf (p :: k -> k -> Type) (q :: l -> l -> Type) f where
  map :: p a b -> q (f a) (f b)
```
`p` represents a relationship (arrow) between `a` and `b`. In the case of a regular functor, it's just `->`, but we can change it to a reverse arrow for contravariants.

`q` is normally just an optional layer on top of `->`, in order to allow mapping over other arguments. For example, if we want to map over the second-to-last argument, we'd use natural transformations (`~>`).
The regular functor instance can be obtained by simply:

```haskell
instance Functor f => FunctorOf (->) (->) f where
  map :: forall a b. (a -> b) -> f a -> f b
  map = fmap

functorExample :: [String]
functorExample = map show ([1, 2, 3, 4] :: [Int])
```
I'll use the `Bifunctor` instance in order to show all bifunctors can have such a `FunctorOf` instance. Of course, one could define instances manually for any `Bifunctor`.

Going back to our original example, we can define a `FunctorOf` instance for `* -> * -> *` types in the first argument via:

```haskell
newtype (~>) f g = Natural (forall x. f x -> g x)

instance Bifunctor f => FunctorOf (->) (~>) f where
  map :: forall a b. (a -> b) -> f a ~> f b
  map f = Natural $ first f
```
In order to avoid fiddling about with newtypes, we can define a helper `bimap'` function for `* -> * -> *` that maps both arguments:

```haskell
bimap' :: forall a b c d f
        . FunctorOf (->) (->) (f a)
       => FunctorOf (->) (~>) f
       => (a -> b)
       -> (c -> d)
       -> f a c
       -> f b d
bimap' f g fac =
  case map f of
    Natural a2b -> a2b (map g fac)

bifunctorExample :: (String, String)
bifunctorExample = bimap' show show (1 :: Int, 1 :: Int)
```
Okay, cool. But what about contravariant functors? We can use `Op` from `Data.Functor.Contravariant` (defined as `newtype Op a b = Op (b -> a)`):

```haskell
instance Contravariant f => FunctorOf Op (->) f where
  map :: forall a b. Op b a -> f b -> f a
  map (Op f) = contramap f
```

This is pretty cool since we only need to change the mapped function's type to be `Op` instead of `->`! As before, we can make things easier by defining a helper:

```haskell
cmap :: forall a b f
      . FunctorOf Op (->) f
     => (b -> a)
     -> f a
     -> f b
cmap f fa = map (Op f) fa

contraExample :: Predicate Int
contraExample = cmap show (Predicate (== "5"))
```
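Running the contravariant example (`getPredicate` comes with `Predicate` in `Data.Functor.Contravariant`; `predicateCheck` is just an illustrative name):

```haskell
import Data.Functor.Contravariant (Predicate (..))

-- 5 is rendered as "5" by show, which satisfies (== "5").
predicateCheck :: Bool
predicateCheck = getPredicate contraExample 5
-- True
```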
I'm glad you asked! It's as easy as 1-2-3, or well, as easy as "functor in the last argument", "contravariant in the previous", "write helper function":

```haskell
instance Profunctor p => FunctorOf Op (~>) p where
  map :: forall a b. Op b a -> p b ~> p a
  map (Op f) = Natural $ lmap f

dimap' :: forall a b c d p
        . FunctorOf (->) (->) (p a)
       => FunctorOf Op (~>) p
       => (b -> a)
       -> (c -> d)
       -> p a c
       -> p b d
dimap' f g pac =
  case map (Op f) of
    Natural b2a -> b2a (map g pac)

profunctorExample :: String -> String
profunctorExample = dimap' read show (+ (1 :: Int))
```
Yep. We only need to define a higher-kinded natural transform and write the `FunctorOf` instance, along with the helper:

```haskell
newtype (~~>) f g = NatNat (forall x. f x ~> g x)

data Triple a b c = Triple a b c deriving (Functor)

instance {-# overlapping #-} FunctorOf (->) (~>) (Triple x) where
  map :: forall a b. (a -> b) -> Triple x a ~> Triple x b
  map f = Natural $ \(Triple x a y) -> Triple x (f a) y

instance FunctorOf (->) (~~>) Triple where
  map :: (a -> b) -> Triple a ~~> Triple b
  map f = NatNat $ Natural $ \(Triple a x y) -> Triple (f a) x y

triple :: forall a b c d e f t
        . FunctorOf (->) (->) (t a c)
       => FunctorOf (->) (~>) (t a)
       => FunctorOf (->) (~~>) t
       => (a -> b)
       -> (c -> d)
       -> (e -> f)
       -> t a c e
       -> t b d f
triple f g h = a2b . c2d . map h
  where
    (Natural c2d) = map g
    (NatNat (Natural a2b)) = map f

tripleExample :: Triple String String String
tripleExample = triple show show show (Triple (1 :: Int) (2 :: Int) (3 :: Int))
```
The pattern is pretty simple:

- we need a `FunctorOf` instance for every argument we want to map
- for each such argument, we need to use `->` for covariant and `Op` for contravariant arguments as the first argument to `FunctorOf`
- from right to left, we need to use increasing levels of transforms to map the type arguments (`->`, `~>`, `~~>`, etc.)
We can define an instance for `Endo` using:

```haskell
data Iso a b = Iso
  { to :: a -> b
  , from :: b -> a
  }

instance FunctorOf Iso (->) Endo where
  map :: forall a b. Iso a b -> Endo a -> Endo b
  map Iso { to, from } (Endo f) = Endo $ to . f . from

endoExample :: Endo String
endoExample = map (Iso show read) (Endo (+ (1 :: Int)))
```
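Applying the mapped endomorphism (`appEndo` comes with `Endo` in `Data.Monoid`):

```haskell
-- endoExample is show . (+1) . read, so "41" maps to "42".
endoResult :: String
endoResult = appEndo endoExample "41"
-- "42"
```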
We can even go further:

```haskell
instance FunctorOf (->) (->) f => FunctorOf Iso Iso f where
  map :: Iso a b -> Iso (f a) (f b)
  map Iso { to, from } = Iso (map to) (map from)
```
which is to say, given an isomorphism between `a` and `b`, we can obtain an isomorphism between `f a` and `f b`!
I think this instance can also be used for proofs. For example, using the `Refl` equality type:

```haskell
data x :~: y where
  Refl :: x :~: x
```

And this means we can write transitivity as:

```haskell
instance FunctorOf (:~:) (->) ((:~:) x) where
  map :: forall a b. a :~: b -> x :~: a -> x :~: b
  map Refl Refl = Refl

proof :: Int :~: String -> Bool :~: Int -> Bool :~: String
proof = map
```
Code is available here.
Another thing worth mentioning is the awesome upcoming GHC extension (being worked on by Csongor Kiss) which allows type families to be partially applied. If you haven't read the paper, you should! Using this feature, one could do something like:

```haskell
type family Id a where Id x = x

instance FunctorOf (->) (->) Id where
  map = ($)

idExample :: Bool
idExample = map (+1) 1 == 2
```
Please note I have not tested the above code; it was suggested by Tom Harding (thanks again for the idea and reviewing!).
What other uses can you come up with?
This post was sparked by a few other posts in the Haskell world. They are, to my knowledge, in chronological order:
Snoyman's manifesto is a call to define a safe subset of the Haskell language and common libraries, provide documentation, tutorials, cookbooks, and continuously evolve, update, and help engineers use and get "boring Haskell" adopted.
Parsons notes that Haskell has a hiring problem: there are few jobs, and most of those are for senior developers. The reason for this is that we overindulge in fancy Haskell, making our code needlessly complicated. If we wrote simple, junior-level Haskell, we would be able to hire junior developers and have them be productive.
Sampellegrini's post points out a few key problems:
While I understand where all of these feelings are coming from, and I agree to some of the ideas, I think they have their marks on the wrong problem.
I think the real problem is that we are not putting up jobs for junior devs. We're not even giving them a chance. And when we are, we usually don't give them enough support (through training and making sure they know who to ask, and that it's okay to do so) to succeed.
I'm really not sure why we're not hiring more junior developers. It might be because seniors like to think that the code they are writing is so complicated that a junior would take too long to be able to understand, so they advise management that a junior cannot possibly be productive. Maybe it's because they don't want to be bothered with training junior devs, and they would rather just work on code instead? Or maybe it's because management doesn't like seniors' time being "wasted" on teaching junior devs?
Whatever the reason, I don't really think writing simpler code will help much. If the onboarding process is lacking, if the company culture is not welcoming to junior devs, most of them will be set for failure from the get-go, regardless of how fancy or simple the code is.
What is a junior developer? For the purposes of this article, I will define a Haskell junior developer as somebody who's able to confidently use simple monads like `Maybe`, `Either e`, `IO`, and the list monad. Ideally, they would also have a crude understanding of monad transformers (but not necessarily `mtl` as well). They are able to compose functions, reason about ADTs, and, perhaps most importantly, are motivated to learn more about Haskell and FP.
I currently work on two projects, both in Haskell. One of these projects has two junior Haskell developers, and the other has one. I will briefly go over the details of these projects as well as my mentoring experience in order to establish a baseline.
I have not been working with Haskell for very long. I actually come from OOPland, and I have a lot of experience as a team lead. I have hired, trained, and mentored a decent number of junior devs, most of them in my OOP days, but also three of them recently, at the place I currently work. For the past year and a half, I have been the main developer in charge of training and making sure the junior devs are productive.
Our codebases (you can see one of them here) are pretty complicated: besides the fact that they use notoriously complex Haskell libraries such as `lens`, `servant`, and `recursion-schemes`, the domain problem is pretty complicated as well: we're essentially building an automated prover for a rewrite-based executable semantic framework (the other project is a pretty standard servant app, so not too much to go over there, although it does use `lens`, `generic-lens`, `persistent`/`esqueleto` and obviously `servant`).
This prelude was needed because I can't really speak about junior developers in general, but I can tell you about my experience with onboarding junior Haskell developers on our projects. However, before that, I would like to add that the junior devs we hired were all either senior year at the local university, or fresh graduates. They were picked because they are all excited about FP, despite the fact that none of them had any previous professional experience related to FP or Haskell.
I am proud to say that all three junior devs are doing great. I obviously can't take any significant part of the credit (they are all very smart and hard working), but I think that there are a few things that contributed to their success:
Only one of the three junior developers we hired was slightly familiar with monad transformers at the time they were hired. The other two were familiar with monads. We were able to get all three to contribute PRs in less than a week after they started. Within 3 to 6 months, I noticed they started being able to complete tasks with little supervision. One of them has been with us for a little over a year, and they are now able to take on complicated tasks (major refactoring, learning new concepts, etc.) pretty much on their own.
Since the subject is hot, I just saw a tweet from Joe Kachmar which expresses the very idea I want to combat: these things aren't THAT hard to teach. Of course a junior won't be able to invent a new type of lenses, add a new layer to our application's monad stack, or reinvent `generic-lens`, but nobody's expecting them to.
After a week of training, I am sure a junior developer can add a new REST API endpoint that is similar to one that's already in our application. They can use getter lenses similar to the ones we already have, but targeting different fields: they can reuse the existing infrastructure to write "boring" code using whatever level of fancy Haskell is already there as a guide.
And sure, sometimes they'll try something new and they'll get stuck on a 20page GHC type error. That's when they ask for help, because they know it's okay not knowing things, and there's always someone available that's happy to help (and they won't help by fixing the error for them, but by guiding them into understanding and fixing the problem themselves).
It's hard to focus on multiple solutions to the same problem. I am also worried that the "Boring Haskell Manifesto" can even be harmful in the long run.
Writing programs is really, really hard. Nothing focuses this feeling better than writing pure FP, because it forces you to be clear, precise and thorough about everything: you can't ignore Nothings, you can't discard Lefts implicitly, you don't get to shove things into a mutable global state.
Writing programs is really, really hard for everyone. It's not only hard for junior developers. It's also hard for senior developers. We haven't figured this out, we're not even close. We still have a terrible story for errors: their composability is far from ideal. We still have a lot of competing libraries for effects, and more seem to be coming. There are a lot of libraries to be explored and discovered.
I do think that each team should be careful when adding language extensions and choosing libraries for each project they work on. And I do think the "fanciness" needs to be taken into account. As Parsons put it on Slack:

fanciness of your code should be gated on the size of your mentoring/training budget if you value hiring juniors

I totally agree, although I would also add that another important aspect worth considering is the benefit of said fanciness.
There are many reasons one might want to stray off the beaten path. Fancy type-level code might save you a ton of code duplication, or it might add features that would otherwise make the code brittle or hard to maintain. For some projects, this may be worth it.
I don't think a blessed set of libraries or extensions will help with this. Which streaming library gets to be picked? Will it be conduit over pipes? What about streaming?
As I said, I think it's the wrong thing to focus on.
We need to stop overestimating how hard it is to use "fancy" libraries like servant, lens or recursion-schemes. Give junior developers a fighting chance and they will surprise you.
I don't think there's anything that makes our company's junior developer success story non-reproducible anywhere else. Our local university doesn't focus on FP or Haskell (they do have one course where they teach Haskell, but that's pretty much it). We were actually forced to take this route because there are no other companies that do Haskell locally (as far as I know), so we can't just find Haskell developers around.
I think this is reproducible anywhere, on pretty much any codebase. We just need to open up junior positions, and give them the support they need to succeed. Have you had some different experience? Is it hard to find junior developers that are somewhat familiar with monads?
Go out there, convince your team that they're not actually living in an ivory tower. It's not that hard, and we're not special for understanding how to use these language extensions and libraries. We can teach junior developers how to use them.
This post assumes you are familiar with: the Functor class/concept, the functor instances for Either a and (,) a, and basic kind knowledge, e.g. the difference between * -> * and * -> * -> *.
In Haskell, functors can only be defined for types of kind * -> *, like Maybe or []. Their instances allow us to use fmap (or <$>) to go from Maybe a to Maybe b using some a -> b, like:
λ> show <$> Just 1
Just "1"
λ> show <$> Nothing
Nothing
λ> show <$> [1, 2, 3]
["1","2","3"]
λ> show <$> []
[]
We can even define functor instances for higher kinded types, as long as we fix type arguments until we get to * -> *. For example, Either has kind * -> * -> *, but Either e has kind * -> *. So that means we can have a functor instance for Either e, given some type e. This might sound confusing at first, but all it means is that the e cannot vary, so we can go from Either e a to Either e b using some a -> b, but we cannot go from Either e1 a to Either e2 a or Either e2 b even if we had both a -> b and e1 -> e2. How would we even pass two functions to fmap?
λ> show <$> Right 1
Right "1"
λ> show <$> Left True
Left True
In the first example, we go from Either a Int to Either a String using show :: Int -> String. In the second example, we go from Either Bool a to Either Bool String, where a needs to have a Show instance.

But what if we want to go from Either a x to Either b x, given some a -> b?
Let's see how we could implement it ourselves:
mapLeft :: (a -> b) -> Either a x -> Either b x
mapLeft f (Left a)  = Left (f a)
mapLeft _ (Right x) = Right x
Since we are trying to map the Left, the interesting bit is for that constructor: we apply f under Left. Otherwise, we rebuild the Right value of type x as-is. Note that we cannot use a catch-all clause like mapLeft _ r = r: r has type Either a x, while the result must have type Either b x, so the Right constructor has to be taken apart and reapplied.
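To see mapLeft in action, here is a minimal, self-contained sketch (restating the definition from above); describeError is a hypothetical example name chosen for illustration:

```haskell
-- mapLeft, restated from above.
mapLeft :: (a -> b) -> Either a x -> Either b x
mapLeft f (Left a)  = Left (f a)
mapLeft _ (Right x) = Right x

-- Hypothetical example: turn a numeric error code into a message,
-- leaving successful results untouched.
describeError :: Either Int String -> Either String String
describeError = mapLeft (\code -> "error code: " ++ show code)
```

Only the Left side changes: describeError (Left 404) yields Left "error code: 404", while a Right value passes through unchanged.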
Here are a few warm-up exercises. The first uses typed holes to guide you and clarify what's meant by "using either". The last exercise can be a bit tricky: look up what Const is and use typed holes.
Exercise 1: reimplement mapLeft' using either:

mapLeft' :: (a -> b) -> Either a x -> Either b x
mapLeft' f e = either _leftCase _rightCase e
Exercise 2: implement mapFirst:

mapFirst :: (a -> b) -> (a, x) -> (b, x)
Exercise 3: implement remapConst:

import Data.Functor.Const (Const(..))

remapConst :: (a -> b) -> Const a x -> Const b x
While we can implement mapLeft, mapFirst, and remapConst manually, there is a pattern: some types of kind * -> * -> * allow both their type arguments to be mapped like a Functor, so we can define a new class:
class Bifunctor p where
  {-# MINIMAL bimap | first, second #-}
  bimap :: (a -> b) -> (c -> d) -> p a c -> p b d
  first :: (a -> b) -> p a c -> p b c
  second :: (b -> c) -> p a b -> p a c
bimap takes two functions and is able to map both arguments in a type of kind * -> * -> *. first is a lot like the functions we just defined manually. second is always the same thing as fmap. This class exists in base, under Data.Bifunctor.
Exercise 4: implement bimap
in terms of
first
and second
.
Exercise 5: implement first
and
second
in terms of bimap
.
Exercise 6: implement the Bifunctor
instance
for Either
:
instance Bifunctor Either where
  bimap f _ (Left a)  = _leftCase
  bimap _ g (Right b) = _rightCase
Exercise 7: Implement the Bifunctor
instance
for tuples (a, b)
.
Exercise 8: Implement the Bifunctor
instance
for Const
.
Exercise 9: Implement the Bifunctor
instance
for (a, b, c)
.
Exercise 10: Find an example of a type with kind * -> * -> * that cannot have a Bifunctor instance.

Exercise 11: Find an example of a type with kind * -> * -> * which has a Functor instance when you fix one type argument, but can't have a Bifunctor instance.
This post assumes you are familiar with: the Functor class/concept, and the functor instance for (->) r.
Not all higher kinded types of kind * -> * can have a Functor instance. While types like Maybe a, (x, a), r -> a, Either e a and [a] are Functors in a, there are some types that cannot have a Functor instance. A good example is Predicate:
newtype Predicate a = Predicate { getPredicate :: a -> Bool }
We call this type a predicate in a because, given some value of type a, we can obtain a True or a False. So, for example: Predicate (> 10) is a predicate in Int which returns true if the number is greater than 10, Predicate (== "hello") is a predicate in String which returns true if the string is equal to "hello", and Predicate not is a predicate in Bool which returns the negation of the boolean value it receives.
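The examples above can be checked directly; this sketch restates the Predicate newtype so it runs standalone:

```haskell
newtype Predicate a = Predicate { getPredicate :: a -> Bool }

-- A predicate in Int: true when the number is greater than 10.
greaterThan10 :: Predicate Int
greaterThan10 = Predicate (> 10)

-- A predicate in String: true when the string is "hello".
isHello :: Predicate String
isHello = Predicate (== "hello")
```

Unwrapping with getPredicate lets us run them: getPredicate greaterThan10 11 is True, while getPredicate isHello "bye" is False.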
We can try writing a Functor
instance and see what we
can learn:
instance Functor Predicate where
fmap :: (a > b) > Predicate a > Predicate b
fmap f (Predicate g) =
Predicate
$ \b > _welp
As the typed hole above would suggest, we need to return a Bool value, and we have: b :: b, f :: a -> b, and g :: a -> Bool. There is no way we can combine these terms at all, let alone in such a way as to obtain a Bool value. The only way we would be able to obtain a Bool value is by calling g, but for that, we need an a, and all we have is a b.
What if f was reversed, though? If we had f' :: b -> a, then we could apply b to it (f' b :: a) and then pass it to g to get a Bool. Let's write this function outside of the Functor class:
mapPredicate :: (b -> a) -> Predicate a -> Predicate b
mapPredicate f (Predicate g) =
  Predicate $ \b -> g (f b)
This looks very weird compared to Functors, right? If you want to go from Predicate a to Predicate b, you need a b -> a function?
Exercise 1: fill in the typed hole _e1:

greaterThan10 :: Predicate Int
greaterThan10 = Predicate (> 10)

exercise1 :: Predicate String
exercise1 = mapPredicate _e1 greaterThan10
Exercise 2: write mapShowable for the following type:

newtype Showable a = Showable { getShowable :: a -> String }

mapShowable :: (b -> a) -> Showable a -> Showable b
Exercise 3: Use mapShowable and showableInt to implement exercise3 such that getShowable exercise3 True is "1" and getShowable exercise3 False is "2".

showableInt :: Showable Int
showableInt = Showable show

exercise3 :: Showable Bool
exercise3 = _
Predicate and Showable are very similar, and types like them admit a Contravariant instance. Let's start by looking at it:

class Contravariant f where
  contramap :: (b -> a) -> f a -> f b
The instances for Predicate and Showable are trivial: they are exactly mapPredicate and mapShowable. The difference between Functor and Contravariant is exactly the function they receive: one is "forward" and the other is "backward", and it's all about how the data type is defined.
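base ships this class in Data.Functor.Contravariant, together with the Predicate newtype, so the "backward" mapping can be tried directly (a small sketch; longString is an illustrative name of mine):

```haskell
import Data.Functor.Contravariant (Predicate (..), contramap)

greaterThan10 :: Predicate Int
greaterThan10 = Predicate (> 10)

-- contramap "preprocesses" the input: measure a String's length
-- first, then feed that Int to the original predicate.
longString :: Predicate String
longString = contramap length greaterThan10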
All Functor
types have their type parameter
a
in what we call a positive position. This
usually means there can be some a
available in the type
(which is always the case for tuples, or sometimes the case for
Maybe
, Either
or []
). It can also
mean we can obtain an a
, like is the case for
r > a
. Sure, we need some r
to do that,
but we are able to obtain an a
afterwards.
On the opposite side, Contravariant
types have their
type parameter a
in what we call a negative
position: they need to consume an a
in order to
produce something else (a Bool
or a String
for
our examples).
Exercise 4: Look at the following types and decide which can
have a Functor
instance and which can have a
Contravariant
instance. Write the instances down:
data T0 a = T0 a Int
data T1 a = T1 (a > Int)
data T2 a = T2L a  T2R Int
data T3 a = T3
data T4 a = T4L a  T4R a
data T5 a = T5L (a > Int)  T5R (a > Bool)
As with Functor
s, we can partially apply higher kinded
types to write a Contravariant
instance. The most common
case is for the flipped version of >
:
newtype Op a b = Op { getOp :: b > a }
While a > b
has a Functor
instance,
because the type is actually (>) a b
, and
b
is in a positive position, its flipped version
has a Contravariant
instance.
Exercise 5: Write the Contravariant
instance
for Op
:
instance Contravariant (Op r) where
contramap :: (b > a) > Op r a > Op r b
Exercise 6: Write a Contravariant
instance for
Comparison
:
newtype Comparison a = Comparison { getComparison :: a > a > Ordering }
Exercise 7: Can you think of a type that has both
Functor
and Contravariant
instances?
Exercise 8: Can you think of a type that can't have a
Functor
nor a Contravariant
instance? These
types are called Invariant
functors.
We've seen how types of kind * > *
can have
instances for Functor
or Contravariant
,
depending on the position of the type argument. We have also seen that
types of kind * > * > *
can have
Bifunctor
instances. These types are morally
Functor
in both type arguments. We're left with one very
common type which we can't map both arguments of:
a > b
. It does have a Functor
instance for
b
, but the a
is morally
Contravariant
(so it can't have a Bifunctor
instance). This is where Profunctor
s come in.
Here's a list of a few common types with the instances they allow:
Type  Functor 
Bifunctor 
Contravariant 
Profunctor 

Maybe a 
✓  
[a] 
✓  
Either a b 
✓  ✓  
(a,b) 
✓  ✓  
Const a b 
✓  ✓  
Predicate a 
✓  
a > b 
✓  ✓ 
Although there are some exceptions, you will usually see
Contravariant
or Profunctor
instances over
function types. Predicate
itself is a newtype over
a > Bool
, and so are most types with these
instances.
Let's take a closer look at a > b
. We can easily map
over the b
, but what about the a
? For example,
given showInt :: Int > String
, what do we need to
convert this function to showBool :: Bool > String
:
showInt :: Int > String
showInt = show
showBool :: Bool > String
= _help showBool b
We would have access to:  showInt :: Int > String

b :: Bool
and we want to use showInt
, so we
would need a way to pass b
to it, which means we'd need a
function f :: Bool > Int
and then _help
would become showInt (f b)
.
But if we take a step back, in order to go from
Int > String
to Bool > String
, we need
Bool > Int
, which is exactly the
Contravariant
way of mapping types.
Exercise 1: Implement a mapInput
function
like:
mapInput :: (input > out) > (newInput > input) > (newInput > out)
Extra credit: try a pointfree implementation as
mapInput = _
.
Exercise 2: Try to guess how the Profunctor
class looks like. Look at Functor
,
Contravariant
, and Bifunctor
for
inspiration.
class Profunctor p where
Exercise 3: Implement an instance for >
for
your Profunctor
class.
instance Profunctor (>) where
Unlike Functor
, Contravariant
, and
Bifunctor
, the Profunctor
class is not in
base
/Prelude
. You will need to bring in a
package like profunctors
to access it.
class Profunctor p where
{# MINIMAL dimap  lmap, rmap #}
dimap :: (c > a) > (b > d) > p a b > p c d
lmap :: (c > a) > p a b > p c b
rmap :: (b > c) > p a b > p a c
dimap
takes two functions and is able to map both
arguments in a type of kind * > * > *
.
lmap
is like mapInput
. second
is
always the same thing as fmap
.
Exercise 4: implement dimap
in terms of
lmap
and rmap
.
Exercise 5: implement lmap
and
rmap
in terms of dimap
.
Exercise 6: implement the Profunctor
instance
for >
:
instance Profunctor (>) where
 your pick: dimap or lmap and rmap
Exercise 7: (hard) implement the Profunctor
instance for:
data Sum f g a b
= L (f a b)
 R (g a b)
instance (Profunctor f, Profunctor g) => Profunctor (Sum f g) where
Exercise 8: (hard) implement the Profunctor
instance for:
newtype Product f g a b = Product (f a b, g a b)
instance (Profunctor f, Profunctor g) => Profunctor (Product f g) where
I know I'm not special, it's just that the "always knew" stories seem to draw more attention. I know a lot of folks who did not always know.
I had no idea. Literally none. Despite being relatively well informed about trans rights and experiences, I was totally oblivious. And the lack of similar stories delayed and made my selfacceptance harder.
So here I am sharing my story, in hopes that someone will find this useful.
It was almost a year ago. I was 36, had a wife, a 2 year old daughter, a good job... and something was missing. I didn't even consciously think about it, I always blamed some other factors (it's normal to be tired/stressed with a baby, it's just the pandemic, it's this or that). I was never able to conjure the question "Am I transgender?".
And yet, a lot of things were off. I always felt weird or different. I never quite fit in like the others. It wasn't extreme, and it wasn't obvious. It was easy to read as something else, such as introversion, shyness, being a bit weird or closed off, etc.
It was my exwife who asked me, one evening, as if it was nothing, whether I am trans (she recalls asking me whether I was in some way part of the LGBTQ+ community, so maybe I just heard what I needed to hear? not sure).
And my initial reaction was, "Of course I'm not!". I would know, right? I'm 36. There's no way I would not know this about myself. I could have passed a polygraph test saying I am a cis man.
I pushed the thought away, but it came back a few weeks later. What if I am transgender? Those first few days are very fuzzy. I started looking up as much as I could online.
One thing which stood out is something you'll find a lot online: "most cis people don't question their gender". And that sounds reasonable enough to at least give me the fuel I needed to dig deeper.
The next thing I remember is finding out about FaceApp and trying it out. I didn't even have a selfie I could use, so I had to take a selfie. I proceeded to launch the app, play with the sliders and hit the gender swap button. AND HOLY FUCK DID MY JAW DROP. Could I actually be attractive?
That thought alone would ring all the alarm bells now, but hindsight doesn't help past me.
I have no idea how I managed to sleep that night, but it wasn't great. The next day, I started reading about HRT (Hormone Replacement Therapy) and surgeries because that's what a lot of the discourse is, at least what you can easily find through searching. And it's such a shame, because there's an implicit idea being thrown, which is, medically transitioning is mandatory or implicit.
There's really no test, question, or anything that can tell someone whether they are transgender or not. Having said that, there seem to be some thoughts, experiences, things in common for a lot of us. None of them are exclusive to transgender folks, but reading them helped me realise some of the things I've been repressing.
Keep in mind you live in your body 24/7. And we humans are really good at adapting to adverse situations. We learn coping mechanisms, we do mental gymnastics, we make the best out of what we have.
I have always hated my body. I would hate looking in the mirror or having my picture taken. I would not take care of it at all, dress in baggy clothes, etc. In retrospect, it turns out I was in the wrong kind of body.
I hated having to dress up for occasions, so I hated them and made a discourse on how they are silly: a waste of time.
Actually, I was even a little bit proud of my lack of vanity. I didn't waste time on grooming, picking clothes or taking care of myself.
I almost always had long hair and I hated when I cut it short (only did it a couple of times and hated it every time).
I had what I later found out is sometimes called a "dysphoria beard". It's when you don't groom or shave your beard because you hate it, and having to tend to it daily/on a regular basis is annoying, painful, something you avoid.
I almost always chose female avatars in games. My mental gymnastics reasoning was, I'd rather look at a female if I was going to spend hours playing, but at least some of them were first person so that was not it.
I used to repress my feelings a lot. I very rarely cried, or had any strong (positive) emotions, especially when it came to me and my life. In retrospect, it is fairly obvious it was a coping mechanism from repressed thoughts, but I did not realise what it was until recently.
I did, however, cry a lot at movies (or other kinds of media). I would always cry at I'll Make A Man Out Of You. I had no idea why it made me cry, but now it's pretty clear to me that it's because it's about a woman being forced to pretend she's a man.
There were also a lot of things that were not obvious until I read them. For example, a sense of detachment or enstrangement from my own feelings and body. Feeling I am going through the motions of participating in my daytoday life and observing rather than actively experiencing and taking part.
It's worth mentioning that once I started accepting, being more active in queer spaces, and reflecting on my past, I started remembering a lot of other things. For example, how back in my teens I had this burning wish to wake up as a girl and experience how that feels like, at least for a while. However, I brushed that off and figured, surely, everybody is curious and would like to experience being the "other gender".
I've always had very few male friends, and avoided alpha male/overly masculine presenting type of people. On the other hand, I've always had female friends, and could easily open up to them.
I've always defended and talked about trans issues in conversations with my family and group of friends. I always found it weird people had a hard time understanding the difference between sex, gender, and sexual orientation, all the while it was easy for me to understand.
There were some positive milestones as well. The first time I saw "her" in the mirror (that's how I, and a lot of transgender folks say when we mean "I saw myself in the mirror and I thought I looked like the gender I feel") was incredible.
It almost always felt amazing to try gendered stuff like clothes, makeup, nail polish, etc, for the first time. However, they relatively quickly become the norm (and that's awesome too!).
It's worth mentioning that, on most occasions when trying something from the list above, I got an erection. That is quite common the first (few) time(s). It doesn't mean it's "just a fetish" or that there's anything wrong with you.
I recall the first time I felt dysphoria. It was about a few weeks after I started questioning. I was doing the dishes, while thinking about random things. And then I looked at my hands and I noticed the hair on my arms, and I totally freaked out. I wanted to pluck it out hair by hair, burn it, do ANYTHING to remove it RIGHT THEN.
I was able to stop and relax, but it was pretty awful. And then a funny thing happened: I realised I just experienced gender dysphoria. And that made me very happy, because it means an extra data point, more validation that I am indeed transgender.
I don't experience genital dysphoria, and as such, I am not planning to have THE (or breast augmentation/facial feminization) surgery. I honestly don't care much about what kind of floppy bits I (or my partner) has. And while I haven't experienced all possible combinations, I am fairly certain they're all fun and interesting.
There are a lot of terrible guides out there for psychiatrists gatepeeking us from the medical care we need. There are places where you don't qualify as transgender unless you suffer from gender dysphoria, or you need to socially live as the gender you identify as for some time (ranges from 6 months to 2 years from what I've heard) before you can get diagnosed and get the treatment you need.
It took me less than a couple of months to be certain enough to want to begin medically transitioning, and I only had a couple of intense dysphoric events.
All of these thoughts and lists of reasons and trans checklists already seem ancient, despite being less than a year old.
I went from taking half an hour of explaining to people that I am trans, and how I prefer feminine pronouns, and what that means and essentially apologising for being a burdain, to just waving and saying my name.
Now I know I don't need a reason (let alone multiple reasons) to be a woman. I just feel like one, and I enjoy being seen as one. And that is enough.
I've learned to take care of my body, to use makeup, to dress up, to walk in high heels. I am now able to look in the mirror and usually like what I see. I even have a few pictures of myself that I love and think I look hot!
I'm not less of a woman for being attracted to women. I'm not less of a woman for having a penis, a beard (or a shadow), or a deep voice. And whether I decide to do something about any of the above is entirely my own personal decision (which may be influenced by dysphoria, preferences, etc.).
I used to not be able to go out or turn on my webcam unless I shaved, dressed girly, wore makeup and all of that. Nowdays, none of that bothers me much at all. I think it's due to multiple reasons, such as feeling more comfortable with myself, my gender expression, and trusting the people around me more.
On that subject, I did lose some friends, and grew apart from most of my family. And that's on them. I tried my best, I gave them all I could, but it shouldn't all be on me. They shouldn't make me feel as if something is wrong with me. They shouldn't make me feel as if I had some reason to be ashamed or apologise. They should be happy that I found myself and that I am happy.
I don't hate them or feel resentment, and if they ever want to get back in touch or try to repair the relationship, I'd be happy to try. I'm usually also happy to educate, but I am rarely in the mood to debate the validity of my identity.
I feel happy, I feel I am slowly finding out how I like to express my gender. And while I am fundamentally the same person, I am finally allowing myself to live, be happy, and enjoy life, and even love myself a tiny bit.
I said that one of the things that helped me was finding this meme about how most cis people don't question their gender. And I wish that wasn't true.
I think everybody should question their gender. Sure, most will find they are aligned with their assigned gender at birth, but they would learn a lot in the process. They would have a better understanding of their gender expression, have a better understanding of the other genders experiences, and probably have a little bit more empathy towards transgender people.
What would my advice be? Just try stuff out. I'm not going to pretend I understand the transmasc experience, but if you're questioning whether you might be transfemme: get your nails done, put on some makeup, try out some feminine clothes. See if it feels right. Try a feminine name for size, ask some people to use feminine pronouns when addressing you. I think that's the only way to really find out.
]]>I will try to explain how this works from the perspective of someone who is familiar with running Linux, but has little to no Nix/NixOS experience. If you're already familiar with NixOS and homemanager and just want to see my setup, the dotfiles repository README might be enough.
Nix
is a functional programming language mainly used to
power nix
, the package manager.
NixOS
uses the nix
package manager, as well
as nix
the programming language to define and manage an
entire Linux distribution, system settings, packages, etc.
The most important thing for me is being able to find the exact versions of everything on my system with reasonable certainty, as well as the ability to update them. Of course, some of it might involve some digging, but I don't mind that.
Secondly, I can easily reproduce any of my systems, entirely. My
systems run the same version of everything. I've had to reinstall. I
accidentally rm rf
ed parts of my system. I could trivially
get everything system or configurationrelated back.
It also makes it easy to have different versions of various software
installed (and even eaiser through the use of nixshell
and
direnv
/ lorri
).
Another big plus for me is the ease of system configuration. I am not an expert in running Linux systems, and having the most common system options browsable through nixopts makes it a lot easier.
It also makes trying out new things very easy (as long as it's already packaged by somebody else), including trying out new system settings. You can also trivially revert your system to a previous configuration.
However, with all the good, there's also downsides. The main problem is that it's definitley not mainstream so if you don't use relatively popular hardware, software, etc., you might end up having to dig quite a bit.
Another downside is that since NixOS
doesn't install
anything globally, you will not be able to run pretty much any binary
without nixifying it first, which at a minimum means patching its
dynamic library paths. The good news is that there's already tools to do
that. But it will still not work out of the box.
You can read a bit more about how nix
and
NixOS
work over on the official NixOS how it works
page.
When I tried nix
out, I wanted to get a good feel for
it, so I jumped straight in by buying an extra disk and installed NixOS
on it. It was fairly painless to set everything up, so a few months
later I concluded the experiment was successful and migrated it to my
main disk.
That being said, you can take it slower by installing
nix
the package
manager.
Another option is installing a NixOS virtual machine.
This post details the first option (although you can easily use the same ideas in a NixOS VM).
The NixOS installation guide is quite good and should get you up and running.
Make sure you read the X Window System section of the configuration and setup a minimal X server unless you are 100% comfortable using the console after the first reboot.
I highly recommend you add a few of the things you need to your
initial list of programs in your install
configuration.nix
:
# Uncomment this if needed (you'll know).
# nixpkgs.config.allowUnfree = true;
[
environment.systemPackages = # or 'emacs' or 'vscode' or whichever editor you prefer
# just make sure you have one
pkgs.vim
# or 'chromium' or 'googlechrome' or whatever
pkgs.firefox];
You can lookup more programs that you might need over on the NixOS package search.
Note: if a package is named haskellPackages.ghcid
, then
you'll need to add pkgs.haskellPackages.ghcid
.
Nix channels are a way to manage your system and globally installed
programs (via nixenv
). How this works is essentially, you
subscribe to a channel (say, nixos21.11
) and then you can
update to the latest released patch by doing
nixchannel update
. The problem with that is that it's
not easily reversible. Also, nixenv
isn't a very pleasant
package management experience.
The package search I mentioned earlier shows install instructions
using nixenv
. I strongly recommend against that.
Luckily, there's a pretty good alternative: we can use niv to point to specific commits in the nixpkgs repository. Those are essentially the same as channels, but we get it written down in a file we control.
We can also have multiple versions pinned for specific software if we need to.
There's a few things we need to do:
nixpkgs
shell.nix
file to make use of said pinconfiguration.nix
fileCreate a new directory, grab niv and init the repository.
mkdir nixfiles
cd nixfiles
nixshell p niv
niv init
You should now have a nix
directory with two files:
sources.nix
and sources.json
. The former
contains basic nix code to load everything in the json file, while the
latter contains the repository data including the commit SHA each
dependency is currently pinned. If you put this file under version
control, you can easily revert to a previously known working
configuration.
And now run niv show
. You should see two dependencies:
niv
itself and nixpkgs
. If
nixpkgs
is pinned to NixOS/nixpkgschannels
,
it means you have an older version of niv
and you need to
update this dependency by running
niv drop nixpkgs
niv add nixos/nixpkgs branch nixos21.11
Note: you should probably use the latest released version available
on the nixpkgs repository
instead of nixos21.11
.
Next, we need to create a shell.nix
file:
let
# Import pinned repositories
sources = import ./nix/sources.nix;
# Grab nixpkgs from there
nixpkgs = import sources.nixpkgs { config.allowUnfree = true; };
in
# Create a shell
{
nixpkgs.mkShell nativeBuildInputs = [
# grab the latest version of niv
nixpkgs.niv ];
# Force this nixpkgs to be available for commands such as
# nixshell p <package>
NIX_PATH =
"nixpkgs=${sources.nixpkgs}:nixosconfig=/etc/nixos/configuration.nix";
}
We can now launch a shell by running nixshell
. You can
check that it's working by checking your NIX_PATH
value:
nixshell
echo $NIX_PATH
And you should see something like
nixpkgs=/nix/store/ynv2jfdrw7arx3q4xjir7mn0j2w97pcysource:nixosconfig=/etc/nixos/configuration.nix
.
The easiest way to get started is to copy the nixfiles that you used during the install phase over. They should be a good place to start:
cp /etc/nixos/*.nix .
Without any additional changes, we can now rebuild everything using the new pin.
In order for all this to work, we'll need to remove the files from
/etc/nixos
and symlink to their copies:
sudo rm /etc/nixos/*
sudo ln s $PWD/configuration.nix /etc/nixos/
And now you are ready to switch to this config using the freshly created pin:
nixshell
sudo preserveenv nixosrebuild switch
You don't need to reenter the nixshell
if you have not
left the one we entered when verifying the NIX_PATH
.
That's it! You can now update to the latest commit of the current
branch by running niv update
. You can also switch branches
to a different NixOS release. Note that you should also update your
configuration.nix
's system.stateVersion
accordingly.
The above is great for setting up your system, but it's not ideal for config files and user programs. For this, home-manager is the better choice.
First off, it does not need root (sudo) rights to change
the settings. Secondly, it has a few more options for some programs, and
some helpers around things like user services, emails, etc.
In order to get started, we'll:
- add home-manager to our pins
- update shell.nix accordingly
- create a home.nix file to start us off
We'll start in the same directory where we created the
shell.nix file previously, and add a pin for
home-manager:
niv add home-manager --branch release-21.11
Make sure the branch release matches your NixOS version!
Next, we need to reference this in our shell.nix
:
let
# Import pinned repositories
sources = import ./nix/sources.nix;
# Grab nixpkgs from there
nixpkgs = import sources.nixpkgs { config.allowUnfree = true; };
# Grab home-manager as well
home-manager = import sources.home-manager { };
in
# Create a shell
nixpkgs.mkShell {
  nativeBuildInputs = [
    # grab the latest version of niv
    nixpkgs.niv
  ];
  NIX_PATH =
    "nixpkgs=${sources.nixpkgs}:home-manager=${sources.home-manager}:nixos-config=/etc/nixos/configuration.nix";
}
Note that we added home-manager to our
NIX_PATH. That's all we need to do for now.
Let's create a new home.nix
file. I added some ideas of
things you might want. Feel free to remove anything you don't care
about:
{ pkgs, ... }: {
home.packages = [
pkgs.killall
pkgs.ripgrep
pkgs.wget
pkgs.unzip
pkgs.zip];
home.file = {
# If you keep these, you'll have to move the files here first.
".config/nvim/init.vim".source = ./init.vim;
".config/nvim/cocsettings.json".source = ./cocsettings.json;
};
programs = {
# Better 'cat'
bat = {
enable = true;
config = {
theme = "TwoDark";
pager = "less -FR";
};
};
# Really useful for autorunning 'shell.nix', see also: lorri
direnv = {
enable = true;
enableBashIntegration = true;
enableFishIntegration = true;
};
# Better 'ls'
exa = {
enable = true;
enableAliases = true;
};
# My favorite shell
fish = {
enable = true;
package = pkgs.fish;
interactiveShellInit = ''
set fish_color_normal "#a4c337"
set fish_color_command "#77c337"
set fish_color_quote "#37c393"
set fish_color_redirection "#37b5c3"
set fish_color_end "#3776c3"
set fish_color_error "#c33759"
'';
shellAliases = {
# exa
ls = "${pkgs.exa}/bin/exa";
ll = "${pkgs.exa}/bin/exa -l";
la = "${pkgs.exa}/bin/exa -a";
lt = "${pkgs.exa}/bin/exa --tree";
lla = "${pkgs.exa}/bin/exa -la";
# git
gs = "${pkgs.git}/bin/git status";
# bat
cat = "${pkgs.bat}/bin/bat";
};
};
fzf = {
enable = true;
enableBashIntegration = true;
enableFishIntegration = true;
};
git = {
enable = true;
delta.enable = true;
aliases = {
lol = "log --graph --decorate --oneline --abbrev-commit";
lola = "log --graph --decorate --oneline --abbrev-commit --all";
hist =
  "log --pretty=format:'%h %ad | %s%d [%an]' --graph --date=short";
work = "log --pretty=format:'%h%x09%an%x09%ad%x09%s'";
};
extraConfig = {
init.defaultBranch = "main";
pull.ff = "only";
merge.conflictstyle = "diff3";
};
ignores = [];
userEmail = "your email here";
userName = "your name here";
};
# Should probably keep this
home-manager = {
enable = true;
};
# This makes it so that if you type the name of a program that
# isn't installed, it will tell you which package contains it.
nix-index = {
enable = true;
enableFishIntegration = true;
enableBashIntegration = true;
};
};
}
You can find all settings for homemanager on their wiki page.
Next, you'll have to symlink this file:
ln -s $PWD/home.nix $HOME/.config/nixpkgs/home.nix
The first time you install home-manager, you'll have to
run:
nix-shell '<home-manager>' -A install
The same command needs to be executed if you update the pin for it.
However, if you just update your configuration file, you can simply run
home-manager switch
And that's pretty much it! You can find more inspiration in my config files.
Please note I use modules a lot which I have not covered here. I do plan to write a blog post about it later.
There are usually a few signs you can look for to see if there's a significant gap in trust between upper management and engineering teams. Here's a short list of the most commonly encountered, in my experience:
While some of these symptoms may cause others (for example, micromanagement may increase turnover, secrecy might increase the team's resistance to change, etc.), they're all caused by one thing: lack of trust between upper management and the engineering teams.
Instead of relying solely on technical prowess, consider empowering your Engineering Managers to bridge this divide. Their role extends beyond managing projects or solving technical problems; they are instrumental in fostering a culture of trust. By actively engaging with both upper management and the engineering teams, they can facilitate open communication, align goals, and strengthen collaboration.
First and foremost, transparent and open communication is key for everything else. Regularly updating both the upper management and the engineering teams on progress, challenges and successes is essential.
Aligning goals and expectations is a close second when it comes to building trust. The goals must be in line with the upper management's vision and plans, while also being realistic and accepted by the engineering teams: alignment is a two-way street.
Recognition and acknowledgement are essential in building trust. And this also goes both ways. The Engineering Manager is responsible for making sure the engineering teams understand the essential work that upper management does, just as much as the upper management should know about the successes and achievements of the engineering teams.
A lack of trust can hinder innovation, slow down decisionmaking, and impede the overall progress of engineering initiatives. Engineering Managers, adept at navigating the technical landscape and interpersonal dynamics, can play a pivotal role in rebuilding and nurturing this essential foundation.
When trust is restored, it ripples through the organization, positively impacting morale, productivity, and ultimately, the quality of deliverables. Investing in the right leadership at the managerial level can be a strategic move to address the root cause of the disconnect.
Take a moment to consider your Engineering department's organisation. Are you looking for someone hands-on/technical in the upper management because you don't trust your engineering teams? Is your company showing signs of a lack of trust?
If so, perhaps the solution is to empower your Engineering Managers to bridge the trust gap.
Dependent languages, or languages that allow type-level natural numbers, allow you to store the size of a vector in its type:
inductive Vector (α: Type): Nat → Type where
  | nil : Vector α 0
  | cons: α → Vector α n → Vector α (n + 1)
If you're unfamiliar with Lean syntax, it's essentially declaring a
new type called Vector
, which takes two arguments: an
α
which is the type of the elements the vector will hold,
and a Nat
, which stands for natural numbers. The result is
a Type
, which is essentially the type of
Vector α n
, for example Vector Bool 3
is a
vector of 3 booleans.
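We can ask Lean to confirm this directly (a quick sketch, assuming the Vector definition above is in scope):

```lean
#check Vector Bool 3          -- Vector Bool 3 : Type
#check (Vector Bool : Nat → Type)  -- partially applied, still awaiting the size
```

This makes explicit that Vector Bool only becomes an inhabitable type once we supply a size.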
The following two lines are the two constructors: nil
which creates a vector of size 0 with no values, and cons
,
which takes one more value (the single α
), and a vector of
size n
to create a vector of size n + 1
.
What this means is, an empty, or nil
vector will
always have a size of 0, because you can't construct it
otherwise. Similarly, whatever the size of the vector, it will always
have that many elements in it.
Here are a few quick examples:
def empty    : Vector Bool 0 := nil
def oneBool  : Vector Bool 1 := cons true nil
def twoBools : Vector Bool 2 := cons true (cons false nil)
If we ask for the element at some position, as long as that number is
smaller than the natural parameter of the Vector, we can always just
grab it! Lean has a type that helps with that: Fin
carries
a natural number, and the proof that it is
smaller than some other natural number,
n:
structure Fin (n : Nat) where
  val  : Nat
  isLt : LT.lt val n  -- LT stands for Less Than; this reads as 'val < n'
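For instance (a made-up value for illustration), constructing a Fin means pairing the number with the proof of its bound:

```lean
-- 2 is a valid index below the bound 3; `decide` discharges the
-- decidable proof obligation that 2 < 3.
def two : Fin 3 := ⟨2, by decide⟩
```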
Which means we can now write a get
function for our
vector type:
def get : Fin n → Vector α n → α
  | ⟨0    , h⟩, .cons x xs => x
  | ⟨i + 1, h⟩, .cons x xs => get ⟨i, Nat.le_of_succ_le_succ h⟩ xs
How this works out is:
- there's no way to construct a Fin 0, because you can't create a proof that any natural number is smaller than 0
- this means we never need to handle nil: it's impossible to call this function on an empty vector!
- if the index is 0 (and, since we have a Fin n, n can never be 0!), we'll just return the top element of the (deconstructed) vector
- otherwise, we recursively call get with i - 1 (or, rather, match on i + 1 and call it with i)
- since xs now has size n - 1, we need to also construct a Fin (n - 1); the proof that i - 1 < n - 1, given h: i < n, is obtained via Nat.le_of_succ_le_succ
- Fin n and Vector α n have the same n, so we can never reach 0 before running out of Vector cons constructors or values
or valuesWhew! There's a lot going in in those packed 2 lines of code.
And before we move on, I just have to share this:
def length : Vector α n → Nat := λ _ => n
Normally, with a list, we'd need to iterate through all of it to find
its length. However, since we keep track of the size of the vector in
its type, and well, Lean is a dependently typed language, we can just
grab that n
from Vector's type and return it as a
value!
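Under that definition, the claim is directly checkable by the compiler (a sketch reusing the twoBools example from above):

```lean
-- No traversal happens: `length` just returns the `n` from the type,
-- so the equality holds by definitional unfolding.
example : length twoBools = 2 := rfl
```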
It seems so natural, and yet, it's either impossible or requires quite some elaborate tricks in languages without dependent type support.
This year's Advent of Code (AoC) featured quite a few puzzles where a
Grid
type comes in handy, specifically when a rectangular
map is a reasonable way to model the problem. For example, if you wanted
to represent a cell that can either be a wall or an empty space, one can
easily write
inductive Cell where
  | Wall
  | Space
And there are plenty of ways to represent a map.
However, I went for a 2D vector, and since not all grids (/maps) are square, it'll need both a width (x) and a height (y):
structure Grid (x: Nat) (y: Nat) (α: Type) where
  data: Vector (Vector α x) y
So once I wrote this and a few helper functions, I was ready to start using it. And naturally, I wanted to write a parser to read up the input for the day's puzzle:
def parseInput : Parsec (Grid x y Cell) := ...
But, whoops. This won't work. x
and y
are
universally quantified, which means that the caller of
parseInput
gets to decide what they are. However, it's not
up to them! It's up to parseInput
to read the input file
and figure out x
and y
. So, essentially, they
need to be existentially quantified. In other words, they are
outputs and not inputs.
And well, being used to Haskell and other non-dependently typed languages, I had been going for a different approach until today:
def parseInput : Parsec (List (List Cell)) := ...

-- ...

def solve (inputs : List (List Cell)): Nat :=
  -- convert the list of lists to a grid
  -- and use the grid here
  -- in this context we know `x` and `y` so it's fine
But all the awkwardness of passing in that list of lists instead of a grid finally caught up to me, and today I spent more than a few seconds thinking about it, and well, I decided to see whether an existential type would work. I haven't gotten to use Lean that much yet, but I do remember seeing this type, which looks like what I need:
structure Sigma {α : Type u} (β : α → Type v) where
  fst : α
  snd : β fst
Let me explain the syntax for a bit: the curly braces mean "implicit
argument", which basically means Lean will figure it out on its own
from the other arguments; in this case, from the second argument, named
β.
And what is β? A type-level function that takes an α and returns a type.
And this sounds exactly like what we want: we have our
Grid
type which takes two naturals, but we don't want to
use them explicitly, so what if we wrote:
def SomeGrid (pair: Nat × Nat): Type :=
  Grid pair.fst pair.snd Cell
This basically says, give me a pair of natural numbers, and I'll give
you a type. That type is a Grid
where the x
is
the first part of the pair, and the y
is the second. And
yes, since types and values live at the same level, we don't need
special syntax to define type-level functions: we can write it like any
other function.
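Since SomeGrid is an ordinary function, Lean can verify this by mere unfolding (a sketch assuming the definitions above):

```lean
-- Both sides reduce to the same type definitionally, so `rfl` suffices.
example : SomeGrid (2, 3) = Grid 2 3 Cell := rfl
```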
And now, we can write our parser like this:
def parseGrid : Parsec (Sigma SomeGrid) := ...
... and the awesome thing is, we can grab any
Sigma SomeGrid value and use it as a regular pair:
- its fst argument will just be a pair of natural numbers which represent the size of the grid
- its snd argument is our Grid x y Cell
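A sketch of what consuming such a value could look like (the describe function here is hypothetical, not from the original code):

```lean
def describe (g : Sigma SomeGrid) : String :=
  let ⟨⟨x, y⟩, _grid⟩ := g
  -- `_grid : Grid x y Cell` is available here, with `x` and `y` in scope
  s!"a grid of size {x} × {y}"
```

Destructuring recovers the dimensions together with a grid whose type mentions exactly those dimensions, which is the whole point of the dependent pair.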
I'll admit I was rather surprised to see that it all worked as simply as I expected, without any surprises or hardtoread typelevel errors. It might just be the case that my Haskell experience trying to play with dependent types has scared me a bit too much, so I'm looking forward to using Lean a bit more!
And since I've mentioned it, my full advent of code solutions up to today (day 22) are on github: https://github.com/eviefp/lean4aoc2023