Fibers

All ordinary Crow code runs in a fiber.
A fiber is like a thread, but implemented by the Crow runtime instead of the operating system. The term thread always refers to a native thread.
The fiber type is internal to the Crow runtime; you only deal with it indirectly.

Fibers let you:

  • Do "blocking" operations, such as network requests, without using callbacks.
  • Write each procedure as straight-line code, even though at runtime the procedures actually run interleaved.
  • Run code in parallel to get work done in less time (by using more processor cores).
Threads accomplish all of these too, but fibers use fewer resources (so you can have more of them) and have features that make parallel code safer.

A fiber is a logical series of operations that don't necessarily correspond to sequential machine instructions.
A fiber might delay for a while, or wait for an event like a keypress.
For example, 3.second delay does not literally freeze the thread for 3 seconds; the thread does other work while the fiber is set aside for 3 seconds.
When a fiber is delayed for whatever reason, it is said to yield.

The fiber queue

The runtime maintains a queue of runnable fibers, called the fiber queue.
This contains only immediately runnable fibers; blocked fibers are handled in other ways. (So the queue only stays non-empty when there are more runnable fibers than available threads.)
The core runtime loop for each thread is to take a fiber off the queue, run it until it yields, do some bookkeeping, then repeat.
When the Crow runtime starts, it launches a thread for each processor, and creates the first fiber that will run main.
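The core loop described above can be modelled in ordinary code. The following is a minimal Python analogy, not the Crow runtime's actual implementation: fibers are modelled as plain functions that run until they "yield" (return), and worker plays the role of one thread pulling from the fiber queue. All names here (fiber_queue, worker) are illustrative.

```python
import queue
import threading

# Model of the fiber queue: it holds only immediately runnable "fibers",
# here represented as plain functions.
fiber_queue = queue.Queue()
results = []

def worker():
    # Core loop of one thread: take a fiber off the queue, run it until
    # it yields, do some bookkeeping, then repeat.
    while True:
        fiber = fiber_queue.get()
        if fiber is None:  # shutdown signal for this sketch
            return
        fiber()                  # run the fiber until it yields
        fiber_queue.task_done()  # bookkeeping

# Enqueue four runnable "fibers".
for i in range(4):
    fiber_queue.put(lambda i=i: results.append(i * i))

# The real runtime launches a thread per processor; one is enough here.
t = threading.Thread(target=worker)
t.start()
fiber_queue.join()     # wait until every fiber has run
fiber_queue.put(None)  # shut the worker down
t.join()
print(results)  # → [0, 1, 4, 9]
```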

Launching new fibers

The common way to launch a new fiber is to use with : parallel. This returns a future for its result.
A future represents a value that may not be ready yet.

await is a function that yields the current fiber (in this case, the one running main) and adds it to a waiting list for the future.
(await does not cause the other fiber to run; that was already started by with : parallel. It just observes the result.)
When the future is resolved (in this case, when the code inside with : parallel completes), the awaiting fiber is added to the fiber queue, meaning it will resume as soon as a thread is available.

main void()
	fut nat future = with : parallel
		info log "Computing value"
		7
	info log "Waiting for value..."
	info log "{fut await}"
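The future mechanics described above can be modelled with an event plus a stored value. This is a simplified Python sketch of the idea, not Crow's actual future type; Future, resolve, and await_ are illustrative names. Resolving makes awaiting fibers runnable again, and awaiting blocks (yields) until the value is ready.

```python
import threading

class Future:
    """Model of a future: a value that may not be ready yet."""

    def __init__(self):
        self._done = threading.Event()
        self._value = None

    def resolve(self, value):
        # Completing the future wakes every awaiting fiber, which in the
        # real runtime means putting them back on the fiber queue.
        self._value = value
        self._done.set()

    def await_(self):
        # The awaiting fiber yields until the future is resolved.
        self._done.wait()
        return self._value

fut = Future()
# Model of `with : parallel`: run the computation elsewhere.
worker = threading.Thread(target=lambda: fut.resolve(7))
worker.start()
print(fut.await_())  # → 7
worker.join()
```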

More parallel fibers

parallel has more functions for running code in parallel. For example, it includes a parallel for loop, which creates a fiber for each element of a collection.

main void()
	squares nat[] = for x : 1::nat .. 10 parallel
		x * x
	info log "{squares.to::json}"

Exclusion

A fiber can't share mutable state with other fibers running in parallel, since unsynchronized access to shared mutable state is the classic thread-safety problem.
So, with : parallel and similar functions take shared lambdas as arguments. That means they can only close over values that are safe to share with parallel fibers.

There is a way to have multiple fibers that share mutable state: give them the same exclusion.
An exclusion is just a number associated with each fiber. The fiber queue will only dequeue a fiber if its exclusion isn't already in use, meaning two fibers with the same exclusion will never run at the same time.
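The dequeue rule can be sketched in Python. This is a model of the idea only, not the runtime's real data structure; try_dequeue and running_exclusions are illustrative names. A fiber is handed out only if no running fiber already holds its exclusion number.

```python
from collections import deque

# Exclusion numbers currently held by running fibers.
running_exclusions = set()

def try_dequeue(fiber_queue):
    # Hand out the first fiber whose exclusion is free; fibers whose
    # exclusion is in use stay queued.
    for _ in range(len(fiber_queue)):
        exclusion, fiber = fiber_queue.popleft()
        if exclusion not in running_exclusions:
            running_exclusions.add(exclusion)
            return exclusion, fiber
        fiber_queue.append((exclusion, fiber))  # still blocked; requeue
    return None

q = deque([(1, "a"), (1, "b"), (2, "c")])
first = try_dequeue(q)   # takes "a"; exclusion 1 is now in use
second = try_dequeue(q)  # skips "b" (same exclusion), takes "c"
print(first, second)  # → (1, 'a') (2, 'c')
```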

As with fiber, you won't reference the exclusion type directly. with : later works like with : parallel but reuses the calling fiber's exclusion.

main void()
	xs string mut[] = ()
	f0 void future = with : later
		xs ~= "hello"
	f1 void future = with : later
		xs ~= "world"
	# 'await' order is not execution order; 'f0' runs first.
	f1 await
	f0 await
	info log xs.to::json.to

In this example there are 3 fibers: One for main, and one for each later.
Since all 3 fibers share the same exclusion, the calls to ~= do not overlap (which would be bad).

Both later and parallel create a new fiber, but parallel uses a new exclusion (meaning, one never seen before) while later shares the exclusion of its caller.
So, later takes a mut lambda (meaning the closure can include anything), while parallel needs a shared one (meaning the closure can only include shared types).
Try changing later to parallel above to see the compile error.

Share any lambda

If some library you are using expects a shared lambda, worry not; you can make any lambda shared just by putting the shared keyword before it.

main void()
	parts string mut[] = ()
	f void shared(x string) = shared x =>
		parts ~= x
		()
	for x : ("hello", "world")::string[] parallel
		f[x]
	info log (", " join parts.to)

shared wraps the lambda with code that changes its calling fiber's exclusion to the exclusion of the fiber that created the lambda (yielding until it's available), and changes it back after calling.
That means the above example is not really running in parallel since there is only one call to f at a time.
However, the calls could happen in any order, so it might log "world, hello". (This won't happen in the browser, which is single-threaded.)
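What the shared wrapper does can be modelled by treating the creating fiber's exclusion as a lock. This Python sketch is an analogy, not Crow's implementation; make_shared is an illustrative name. The wrapper acquires the creator's exclusion before running the body (yielding until it's free) and releases it afterward, so parallel calls never overlap.

```python
import threading

def make_shared(body):
    # Model of the creating fiber's exclusion as a lock.
    creator_exclusion = threading.Lock()

    def wrapper(*args):
        with creator_exclusion:   # yield until the exclusion is free
            return body(*args)    # run under the creator's exclusion

    return wrapper

parts = []
f = make_shared(parts.append)  # list.append alone is not "shared-safe"

# Model of the parallel for loop: one thread per element.
threads = [threading.Thread(target=f, args=(x,)) for x in ("hello", "world")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The calls never overlap, but their order is unspecified.
print(sorted(parts))  # → ['hello', 'world']
```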

When writing code that returns a lambda, you should usually make it shared to make it easier for the user.

Futures and exceptions

When a fiber throws an exception, its future is completed with that exception, which is then rethrown at await.

main void()
	xs string[] = "hello",
	fut string future = with : parallel
		xs[1]
	info log "this is reached"
	x = fut await
	info log "{x} is not reached"
	()
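The exception-carrying behavior can be modelled by extending the future sketch: the future stores either a value or an exception, and await rethrows a stored exception. This is an illustrative Python model, not Crow's future type; Future, run, and await_ are invented names.

```python
import threading

class Future:
    """Model of a future that completes with a value or an exception."""

    def __init__(self):
        self._done = threading.Event()
        self._value = None
        self._error = None

    def run(self, body):
        try:
            self._value = body()
        except Exception as e:
            self._error = e  # complete the future with the exception
        self._done.set()

    def await_(self):
        self._done.wait()
        if self._error is not None:
            raise self._error  # the exception is rethrown at await
        return self._value

fut = Future()
# Model of `with : parallel xs[1]` where xs has only one element:
threading.Thread(target=fut.run, args=(lambda: ["hello"][1],)).start()
print("this is reached")
try:
    fut.await_()
    print("not reached")
except IndexError:
    print("the exception surfaces at await")
```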