
Monads with Join() instead of Bind()

Monads are usually explained in terms of return and bind. However, I gather you can also implement bind in terms of join (and fmap?)

In programming languages lacking first-class functions, bind is excruciatingly awkward to use. join, on the other hand, looks quite easy.

I'm not completely sure I understand how join works, however. Obviously, it has the [Haskell] type

join :: Monad m => m (m x) -> m x

For the list monad, this is trivially and obviously concat. But for a general monad, what, operationally, does this method actually do? I see what it does to the type signatures, but I'm trying to figure out how I'd write something like this in, say, Java or similar.

(Actually, that's easy: I wouldn't. Because generics are broken. ;-) But in principle the question still stands...)
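(For the list case, here is a quick sanity check of the concat claim; join lives in Control.Monad:)

import Control.Monad (join)

main :: IO ()
main = print (join [[1, 2], [3]])   -- prints [1,2,3], i.e. concat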

Oops. It looks like this has been asked before:

Monad join function

Could somebody sketch out some implementations of common monads using return, fmap and join? (I.e., not mentioning >>= at all.) I think perhaps that might help it to sink in to my dumb brain...

Does this snippet help? join m = m >>= id
@DanielWagner I know how to define join. At least, I know the incantation that yields the correct type signature. But I'm struggling a little to wrap my mind around what that literally does...
@MathematicalOrchid: There isn't really a general definition of join, in the same way that there isn't a general definition of bind. Both of them define what the monad does! Once you have defined them, you've picked out a particular monad.
For whatever it is worth to you, here are some join/cojoin implementations in Java: a reader monad, a list monad, a rose-tree comonad, and a list-zipper comonad. PS: I suck arse at formatting, sorry.
@MathematicalOrchid Thanks. So I think you mean that concat is one easily spotted join candidate (given some rudimentary Haskell knowledge) rather than the (i.e., one & only) possibility. I raised my question out of concern that readers might take your remark literally and believe it to be true, thus missing some illuminating lines of inquiry. For instance, does concat satisfy the required monad laws? Do other candidates? If yes & no (respectively), how could one derive concat from the laws? I use such questions to lead me away from the "obvious", i.e., my blind spots.

pigworker

Without plumbing the depths of metaphor, might I suggest reading a typical monad type m v as "strategy to produce a v", so the type m value is a first-class "strategy to produce a value". Different notions of computation or external interaction require different types of strategy, but the general notion requires some regular structure to make sense:

if you already have a value, then you have a strategy to produce a value (return :: v -> m v) consisting of nothing other than producing the value that you have;

if you have a function which transforms one sort of value into another, you can lift it to strategies (fmap :: (v -> u) -> m v -> m u) just by waiting for the strategy to deliver its value, then transforming it;

if you have a strategy to produce a strategy to produce a value, then you can construct a strategy to produce a value (join :: m (m v) -> m v) which follows the outer strategy until it produces the inner strategy, then follows that inner strategy all the way to a value.
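(These three operations are in fact enough: bind is recoverable from fmap and join, and join from bind, as in the comment thread above. A generic sketch, with primed names to avoid clashing with base:)

join' :: Monad m => m (m v) -> m v
join' mmv = mmv >>= id          -- join via bind

bind' :: Monad m => m v -> (v -> m w) -> m w
bind' mv f = join' (fmap f mv)  -- bind via join and fmap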

Let's have an example: leaf-labelled binary trees...

data Tree v = Leaf v | Node (Tree v) (Tree v)

...represent strategies to produce stuff by tossing a coin. If the strategy is Leaf v, there's your v; if the strategy is Node h t, you toss a coin and continue by strategy h if the coin shows "heads", t if it's "tails".

instance Monad Tree where
  return = Leaf

A strategy-producing strategy is a tree with tree-labelled leaves: in place of each such leaf, we can just graft in the tree which labels it...

join :: Tree (Tree v) -> Tree v   -- an ordinary function here, not a class method
join (Leaf tree) = tree
join (Node h t)  = Node (join h) (join t)

...and of course we have fmap which just relabels leaves.

instance Functor Tree where
  fmap f (Leaf x)    = Leaf (f x)
  fmap f (Node h t)  = Node (fmap f h) (fmap f t)
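(For readers typing this in: in current GHC, Monad requires Functor and Applicative superclass instances, and join is an ordinary library function rather than a class method. A minimal compilable rendering of the above, with the data declaration repeated and joinTree as a standalone name, might look like:)

import Control.Monad (ap)

data Tree v = Leaf v | Node (Tree v) (Tree v)
  deriving Show

instance Functor Tree where
  fmap f (Leaf x)   = Leaf (f x)
  fmap f (Node h t) = Node (fmap f h) (fmap f t)

instance Applicative Tree where
  pure  = Leaf                      -- return defaults to pure
  (<*>) = ap                        -- standard shortcut given (>>=)

instance Monad Tree where
  t >>= f = joinTree (fmap f t)     -- bind via join, as in the PS below

joinTree :: Tree (Tree v) -> Tree v
joinTree (Leaf tree) = tree
joinTree (Node h t)  = Node (joinTree h) (joinTree t)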

Here's a strategy to produce a strategy to produce an Int.

https://i.stack.imgur.com/7ts7a.jpg

Toss a coin: if it's "heads", toss another coin to decide between two strategies (producing, respectively, "toss a coin for producing 0 or producing 1" or "produce 2"); if it's "tails" produce a third ("toss a coin for producing 3 or tossing a coin for 4 or 5").

That clearly joins up to make a strategy producing an Int.

https://i.stack.imgur.com/Dw5vO.jpg
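(Transcribed into the Tree type above, assuming I've read the two diagrams correctly:)

strategy :: Tree (Tree Int)
strategy =
  Node (Node (Leaf (Node (Leaf 0) (Leaf 1)))   -- heads, heads: coin for 0 or 1
             (Leaf (Leaf 2)))                  -- heads, tails: just produce 2
       (Leaf (Node (Leaf 3)                    -- tails: coin for 3,
                   (Node (Leaf 4) (Leaf 5))))  --   or another coin for 4 or 5

-- join strategy == Node (Node (Node (Leaf 0) (Leaf 1)) (Leaf 2))
--                       (Node (Leaf 3) (Node (Leaf 4) (Leaf 5)))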

What we're making use of is the fact that a "strategy to produce a value" can itself be seen as a value. In Haskell, the embedding of strategies as values is silent, but in English, I use quotation marks to distinguish using a strategy from just talking about it. The join operator expresses the strategy "somehow produce then follow a strategy", or "if you are told a strategy, you may then use it".

(Meta. I'm not sure whether this "strategy" approach is a suitably generic way to think about monads and the value/computation distinction, or whether it's just another crummy metaphor. I do find leaf-labelled tree-like types a useful source of intuition, which is perhaps not a surprise as they're the free monads, with just enough structure to be monads at all, but no more.)

PS The type of "bind"

(>>=) :: m v -> (v -> m w) -> m w

says "if you have a strategy to produce a v, and for each v a follow-on strategy to produce a w, then you have a strategy to produce a w". How can we capture that in terms of join?

mv >>= v2mw = join (fmap v2mw mv)

We can relabel our v-producing strategy by v2mw, producing instead of each v value the w-producing strategy which follows on from it — ready to join!
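(A small worked instance of that recipe in the Tree monad, with double as a made-up follow-on strategy:)

double :: Int -> Tree Int
double n = Node (Leaf n) (Leaf (n * 2))   -- toss a coin: keep n or double it

example :: Tree Int
example = Node (Leaf 1) (Leaf 3) >>= double
-- == join (fmap double (Node (Leaf 1) (Leaf 3)))
-- == Node (Node (Leaf 1) (Leaf 2)) (Node (Leaf 3) (Leaf 6))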


@Jamil if you found this helpful, you should check out the article Monads Are Trees With Grafting by Dan Piponi. It shows how you can view all monads in this way.
@Jamil (Thanks!) In the list monad, a "strategy to produce a value" consists of offering a choice of values. concat turns a choice between choices of things into a choice of things, like turning a street full of shops (first choose shop, then choose stuff) into one big supermarket.
Yes it does. We have v2mw :: v -> m w and fmap :: (a -> b) -> m a -> m b so fmap v2mw typechecks with a = v and b = m w giving type m v -> m (m w), which is why the join afterwards gives you m w.
As far as natural-language metaphors go, "strategy to produce a value" sounds less clunky than "a value in a context".
Chris Taylor's link was from Dan Piponi's blog. The link is dead now. Dan Piponi gives an updated link.
Daniel Wagner
join = concat -- []
join f = \x -> f x x -- (e ->)
join f = \s -> let (f', s') = f s in f' s' -- State
join (Just (Just a)) = Just a; join _ = Nothing -- Maybe
join (Identity (Identity a)) = Identity a -- Identity
join (Right (Right a)) = Right a; join (Right (Left e)) = Left e; 
                                  join (Left e) = Left e -- Either
join ((a, m), m') = (a, m' `mappend` m) -- Writer
-- N.B. there is a non-newtype-wrapped Monad instance for tuples that
-- behaves like the Writer instance, but with the tuple order swapped
join f = \k -> f (\f' -> f' k) -- Cont
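(The (e ->) line can be checked directly against base, which defines the function monad; a quick sanity check:)

import Control.Monad (join)

six :: Int
six = join (+) 3   -- join f = \x -> f x x, so this is (+) 3 3 == 6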

Well, it's what I asked for. ;-) It's a bit... dense to really comprehend though. Maybe if I look at it really slowly or something...
Will Ness

Calling fmap (f :: a -> m b) (x :: m a) produces values (y :: m (m b)), so it is a very natural thing to use join to get back values (z :: m b).

Then bind is defined simply as bind ma f = join (fmap f ma), thus achieving the Kleisli composability of functions of the (:: a -> m b) variety, which is what it is really all about:

ma `bind` (f >=> g) = (ma `bind` f) `bind` g              -- bind = (>>=)
                    = (`bind` g) . (`bind` f) $ ma 
                    = join . fmap g . join . fmap f $ ma

And so, with flip bind = (=<<), we have

    ((g <=< f) =<<)  =  (g =<<) . (f =<<)  =  join . (g <$>) . join . (f <$>)

https://i.stack.imgur.com/G5uCl.png
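(A concrete check of that chain in the Maybe monad, with f and g as made-up Kleisli arrows:)

import Control.Monad (join, (<=<))

f, g :: Int -> Maybe Int
f x = if even x then Just (x `div` 2) else Nothing
g   = Just . (+ 1)

check :: Bool
check = ((g <=< f) =<< Just 4) == (join . fmap g . join . fmap f $ Just 4)
-- both sides come out as Just 3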


Sidenote: (return . f' =<<) = (f' <$>) = join . (return . f' <$>). Similarly, (g' <$>) . (f' <$>) = (return . g' =<<) . (return . f' =<<) = ((return . g') <=< (return . f') =<<). (NB: f' and g' here are "horizontal", not "diagonal".)
Yuval Itzchakov

OK, so it's not really good form to answer your own question, but I'm going to note down my thinking in case it enlightens anybody else. (I doubt it...)

If a monad can be thought of as a "container", then both return and join have pretty obvious semantics. return generates a 1-element container, and join turns a container of containers into a single container. Nothing hard about that.
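(Concretely, for lists:)

import Control.Monad (join)

one :: [Int]
one = return 5            -- [5]: a one-element container

flat :: [Int]
flat = join [[1], [2, 3]] -- [1,2,3]: a container of containers, flattened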

So let us focus on monads which are more naturally thought of as "actions". In that case, m x is some sort of action which yields a value of type x when you "execute" it. return x does nothing special, and then yields x. fmap f takes an action that yields an x, and constructs an action that computes x and then applies f to it, and returns the result. So far, so good.

It's fairly obvious that if f itself generates an action, then fmap f leaves you with m (m x). That is, an action that computes another action. In a way, that's maybe even simpler to wrap your mind around than the >>= function, which takes an action and a "function that produces an action" and so on.

So, logically speaking, it seems join would run the first action, take the action it produces, and then run that. (Or rather, join would return an action that does what I just described, if you want to split hairs.)

That seems to be the central idea. To implement join, you want to run an action, which then gives you another action, and then you run that. (Whatever "run" happens to mean for this particular monad.)

Given this insight, I can take a stab at writing some join implementations:

join Nothing = Nothing
join (Just mx) = mx

If the outer action is Nothing, return Nothing, else return the inner action. Then again, Maybe is more of a container than an action, so let's try something else...

newtype Reader s x = Reader (s -> x)

join (Reader f) = Reader (\ s -> let Reader g = f s in g s)

That was... painless. A Reader is really just a function that takes a global state and only then returns its result. So to unstack, you apply the global state to the outer action, which returns a new Reader. You then apply the state to this inner function as well.

In a way, it's perhaps easier than the usual way:

Reader f >>= g = Reader (\ s -> let Reader h = g (f s) in h s)

Now, which one is the reader function, and which one is the function that computes the next reader...?
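(One way to dodge that juggling is to derive bind mechanically as join-after-fmap; a sketch, with the obvious Functor instance and bindReader as a made-up name:)

instance Functor (Reader s) where
  fmap f (Reader g) = Reader (f . g)

bindReader :: Reader s x -> (x -> Reader s y) -> Reader s y
bindReader r f = join (fmap f r)   -- join is the Reader join above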

Now let's try the good old State monad. Here every function takes an initial state as input but also returns a new state along with its output.

data State s x = State (s -> (s, x))

join (State f) = State (\ s0 -> let (s1, State g) = f s0 in g s1)

That wasn't too hard. It's basically run followed by run.
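(A tiny check with an action that yields another action; tick is a made-up counter step:)

tick :: State Int Int
tick = State (\ n -> (n + 1, n))        -- bump the state, return the old value

nested :: State Int (State Int Int)
nested = State (\ n -> (n + 1, tick))   -- one step that hands back another step

-- running join nested from 0: the outer step takes 0 to 1 and yields tick,
-- then tick takes 1 to 2 and returns 1, so
--   let State f = join nested in f 0  ==  (2, 1)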

I'm going to stop typing now. Feel free to point out all the glitches and typos in my examples... :-/


I only spotted one error - lacking parens in join (Just mx). Also, it's not bad form to answer your own question, cf. blog.stackoverflow.com/2011/07/…
solrize

I've found many explanations of monads that say "you don't have to know anything about category theory, really, just think of monads as burritos / space suits / whatever".

Really, the article that demystified monads for me just said what categories were, described monads (including join and bind) in terms of categories, and didn't bother with any bogus metaphors:

http://en.wikibooks.org/wiki/Haskell/Category_theory

I think the article is very readable without much math knowledge required.


rdm

Asking what a type signature in Haskell does is rather like asking what an interface in Java does.

It, in some literal sense, "doesn't". (Though, of course, you will typically have some sort of purpose associated with it; that purpose is mostly in your mind, and mostly not in the implementation.)

In both cases you are declaring legal sequences of symbols in the language which will be used in later definitions.

Of course, in Java, I suppose you could say that an interface corresponds to a type signature which is going to be implemented literally in the VM. You can get some polymorphism this way -- you can define a name that accepts an interface, and you can provide a different definition for the name which accepts a different interface. Something similar happens in Haskell, where you can provide a declaration for a name which accepts one type and then another declaration for that name which treats a different type.
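(In Haskell, that per-type definition of one name is what type classes provide; a small illustration with a made-up class:)

class Describe a where
  describe :: a -> String           -- one signature, one definition per type

instance Describe Bool where
  describe b = if b then "yes" else "no"

instance Describe Int where
  describe n = "the number " ++ show n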


Dagang

This is Monad explained in one picture. The two functions in the green category are not composable; when they are mapped to the blue category with join . fmap (strictly speaking, the two are one category), they become composable. Monad is about turning a function of type T -> Monad<U> into a function of type Monad<T> -> Monad<U>.

https://i.stack.imgur.com/wOlfQ.png
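(In Haskell terms, that "turning" is just bind with its arguments flipped; promote is a made-up name for it:)

import Control.Monad (join)

promote :: Monad m => (t -> m u) -> (m t -> m u)
promote f = join . fmap f   -- equivalently, (f =<<)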