Smart constructors and inversion of control in ReasonML. Objects, modules or types?

reasonreact

#1

Hello,

What is the best practice for recreating in Reason the newtypes and smart constructors we find in Elm and Haskell? For example, we want a type Email that is internally a string but can only be created through a create function, and that hides the internal representation to ensure it’s correctly formatted. Intuitively it seems that one could use either modules or objects; is one preferred over the other?

Another related question: say we want to invert control of our effects by hiding them behind an interface, so we have an IDatabase that contains a get and a delete function. We then want to instantiate it with an effectful Postgres version and a mocked version that uses a hashmap for integration/local testing. Is it best to use modules, objects or types for this?


#2

You can use an abstract type in place of a newtype to hide the implementation. For example:

module Email: {
  type t;

  let make: string => option(t);
} = {
  type t = string;

  /* naive check standing in for real validation */
  let isValid = email => String.contains(email, '@');

  let make = email =>
    isValid(email) ? Some(email) : None;
};

For dependency injection you can use functors or pass around a first-class module, for example. But I don’t think copying the OO approach to dependency injection is a great idea in a functional language. You might want to try a more functional approach to architecting your application instead. See for example this great article by Scott Wlaschin on Functional approaches to dependency injection.
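To make the first-class-module approach concrete, here is a minimal sketch; the names (Database, MockDb, fetchOrDefault) are hypothetical, not from any library:

```reason
/* A module type describing the dependency */
module type Database = {
  let get: string => option(string);
  let delete: string => unit;
};

/* In-memory mock backed by a Hashtbl, handy for tests */
module MockDb: Database = {
  let store: Hashtbl.t(string, string) = Hashtbl.create(16);
  let get = key => Hashtbl.find_opt(store, key);
  let delete = key => Hashtbl.remove(store, key);
};

/* Consumers receive the implementation as a first-class module */
let fetchOrDefault = (db: (module Database), key, default) => {
  module Db = (val db);
  switch (Db.get(key)) {
  | Some(v) => v
  | None => default
  };
};

let _ = fetchOrDefault((module MockDb), "user:1", "unknown");
```

A Postgres-backed module satisfying the same Database signature could be passed in exactly the same way.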


#3

Thanks, that’s just what I was looking for. So modules are the way to go.

The article is great as well. So rather than using interfaces (or records) that may contain unused functions, we should just pass each function exactly the arguments it needs, explicitly.


#4

Yeah, that’s part of it. But I think the more important takeaway is that you should design your functions to be composable and pure (i.e. free of side-effects). Instead of perceiving the bulk of your application as an imperative machine that is manipulated by taking various actions, use a data-driven approach to describe intent. That way you won’t need to mock anything, you can just compare the description produced directly against what’s expected.
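As a hypothetical illustration of that data-driven style (the command type and onSignup are invented here): effects become plain data that you can compare in tests.

```reason
/* Effects described as data rather than performed */
type command =
  | SendEmail(string, string) /* recipient, body */
  | LogMetric(string, int);

/* Pure: returns a description of what should happen */
let onSignup = email => [
  SendEmail(email, "Welcome!"),
  LogMetric("signups", 1),
];

/* A test compares the description directly, no mocks needed */
let _ =
  assert(
    onSignup("ada@example.com")
    == [SendEmail("ada@example.com", "Welcome!"), LogMetric("signups", 1)]
  );
```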


#5

I see now that this article doesn’t go as deep into it as I seemed to recall. There’s a series of articles by Mark Seemann, however, that goes quite a bit deeper; that’s what I was confusing it with.


#6

Right, another interesting read that I admit I’ll need to look at some more. However, I’m not sure that the suggested solution, free monads, is that good, as they carry a big performance penalty. I also don’t really see the difference between passing partially applied functions and mocking, and something like MTL from Haskell, since the typeclasses are essentially being used for dependency injection there as well. The article you link argues that this somehow isn’t a functional approach, but I don’t yet see where the problem lies.


#7

You may not need a heavy-weight approach. In my experience most effectful computations nowadays are naturally in some kind of ‘effect manager’ type like Js.Promise.t or Lwt.t. These denote computations which may complete after a delay or may fail. And ‘pure’ computations are not in a promise context.

My heuristic for keeping a pure codebase is to try to minimize the former and maximize the latter. One strategy is to use a layered architecture, where you have a pure domain model and business logic layer; above that a service layer which does effectful things; and above that the controller layer which coordinates and runs the effects overall.

The trick then becomes to keep the domain layer pure in the presence of service-layer effects, which mean the actual decisions need to be made at runtime as the effects run. One simple example of this is to calculate a total invoice amount from lineitems, e.g. invoiceTotal(lineitems, tax, discount), after running service-layer effects like getTax(location) and getDiscount(customer). This is where your controller layer comes in: it makes the effectful service calls and monadically passes the results to the pure calculation. So in terms of the (Reason/BuckleScript) code:

Js.Promise.(
  all2((getTax(location), getDiscount(customer)))
  |> then_(((tax, discount)) =>
       resolve(invoiceTotal(lineitems, tax, discount))
     )
);

Now you can easily unit-test the pure invoiceTotal calculation for different inputs. As for the service and controller layers: imho it’s better to test those in integration instead of mocking service calls.
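For instance, invoiceTotal might look something like this; the line item shape (a description/price pair) is an assumption for the sketch:

```reason
/* Pure calculation: no effects, just values in and a value out */
let invoiceTotal = (lineitems, tax, discount) => {
  let subtotal =
    List.fold_left((acc, (_desc, price)) => acc +. price, 0.0, lineitems);
  (subtotal -. discount) *. (1.0 +. tax);
};

/* Unit testing is just comparing values (0.25 and 10.0 chosen so
   the floating-point result is exact) */
let _ = assert(invoiceTotal([("widget", 100.0)], 0.25, 10.0) == 112.5);
```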

I hope that makes sense!


#8

I realize I might go a bit off topic now, but if we do integration tests using QuickCheck we essentially have to create a mocked version anyway. A good example is testing a file-system. If we have mocked all services for our integration tests and ensured that the mocked service and the effectful one share the same properties, then would it not be safe to use those mocks for E2E tests? If so, for E2E tests we want to set up the entire system with all the dependencies mocked anyway. Am I missing something here, or is this too much work in practice?


#9

When I say ‘integration tests’ I’m talking about something like Cypress that’s carrying out actual clicks and other actions against your frontend, in an integration environment with QA instances of your services.

Imho, mocking service responses for something like QuickCheck is trying to shoehorn integration testing into something that’s meant for unit or property testing. It will increase the amount of sheer administrative work you have to do in managing mock data for all these services and slow you down considerably. Also if you’re doing the kind of integration testing I described above (which you really should), it will also massively increase duplication across your tests with little or no benefit.

My advice: keep unit tests in your codebase and set up integration tests with a proper QA plan.


#10

Okay I have some more questions but I think it’s too off topic at this point so I’ll ask them in another forum.


#11

I’m not sure that the suggested solution, free monads, is that good as they have a big performance penalty.

Yeah, sorry, I didn’t mean to suggest delving into free monads and such. Not for performance reasons, but because I don’t think such a level of abstract-ness is particularly productive. It’s usually better to use more purpose-built constructs IMO.

Take the Elm Architecture as an example. Its Msg type is basically a description of high-level intent, that is translated into state transitions and low-level commands, which are also descriptions telling the runtime what to do, by the update function. No monad in sight, and yet completely pure. The devil is in the runtime of course, which isn’t implemented in Elm itself, but it would be fairly easy to implement in Reason.
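A minimal sketch of that shape in Reason (the msg and cmd constructors here are invented for illustration):

```reason
/* High-level intent */
type msg =
  | Increment
  | SaveRequested;

/* Low-level commands: descriptions the runtime interprets */
type cmd =
  | NoCmd
  | SaveToServer(int);

/* Pure: maps a message to the next state plus a command description */
let update = (state, msg) =>
  switch (msg) {
  | Increment => (state + 1, NoCmd)
  | SaveRequested => (state, SaveToServer(state))
  };

let _ = assert(update(1, Increment) == (2, NoCmd));
```

No effect is performed anywhere in update; a test simply compares the returned state and command against what's expected.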

I also don’t really see the difference between passing partial functions and mocking and something like MTL from Haskell as the typeclasses are essentially being used as dependency injection there as well.

I’m not all that sophisticated with Haskell, so I might not understand all the implications. But from what I do understand, using type classes is essentially the same as passing first-class modules around in Reason (instead of separate functions), except it’s done implicitly. That implicitness makes it a very convenient approach in Haskell (but also tends to make the code harder to understand). Using a (module) functor is a middle ground where you pass those modules (as analogues to type class instances) as arguments at the module level instead of at the function level. But since you can’t define functors alongside toplevel modules, that approach isn’t very convenient either.
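For comparison, the functor version might look like this; DB and MakeUserService are hypothetical names for the sketch:

```reason
/* The dependency, as a module type (analogous to a type class) */
module type DB = {
  let get: string => option(string);
  let delete: string => unit;
};

/* The "instance" is supplied once, at the module level */
module MakeUserService = (Db: DB) => {
  let removeUser = id =>
    switch (Db.get(id)) {
    | Some(_) =>
      Db.delete(id);
      true;
    | None => false
    };
};

/* Instantiated with a stub for tests; a Postgres-backed module
   satisfying DB would be used in production */
module TestService =
  MakeUserService({
    let get = id => id == "known" ? Some(id) : None;
    let delete = _ => ();
  });
```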

The article you link argues that it isn’t somehow a functional approach but I’m not sure where the problem is in this approach yet.

Since Haskell is inherently pure and every side-effect is already encapsulated in a monad, I don’t think there really is a difference in that sense. But in an impure language, unless you create your own IO monad and interpreter to push side-effects to the boundary, the “dependencies” you pass in are impure and will make every function that uses them, directly or indirectly, impure as well.

In the context of testing, the difference is that you have to mock impure dependencies, whereas you can inspect and compare the return value of pure functions directly. As long as you have access to the content of those values at least, which I suppose you don’t in Haskell, hence the need for mocking. Both the tests and the functions themselves become simpler and easier to reason about when they’re pure, and if you don’t have any dependencies you don’t have to pass them around either.