Automatic JS / JSON encoding/decoding to Reason data?


#1

Hi!

Is there a way to automatically generate Json decoders from type definitions to reduce boilerplate on interop and HTTP requests code?

I would still like to validate the data to avoid runtime exceptions, which the BuckleScript converters don’t seem to do.

Cheers!


#2

There is Milk, but it’s pretty new, so we’re waiting for it to stabilize a bit and for Jared to officially announce it.

In the meantime I recommend using bs-json and wrapping the unsafe functions into versions which return a Belt.Result.t type, e.g.

// JsonResult.re
let parse = data => ...;
let decode = (decoder, json) => ...;

Then you can use these safe versions like

let line = JsonResult.(
  data->parse->Belt.Result.flatMap(decode(Decode.line))
);
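
For reference, here’s one possible implementation of those wrappers (a minimal sketch assuming bs-json’s Json.parse and its Json.Decode.DecodeError exception; adapt as needed):

// JsonResult.re
let parse = data =>
  switch (Json.parse(data)) {
  | Some(json) => Belt.Result.Ok(json)
  | None => Belt.Result.Error("Invalid JSON: " ++ data)
  };

let decode = (decoder, json) =>
  switch (decoder(json)) {
  | value => Belt.Result.Ok(value)
  | exception (Json.Decode.DecodeError(msg)) => Belt.Result.Error(msg)
  };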

#3

Modeling application data structures after server response data is bad practice. Not just because OCaml/Reason data types don’t map all that well to JSON, but because you usually don’t want the data in the same shape anyway, and synchronizing changes to the shape of server data with the front-end isn’t trivial. The purpose of writing JSON decoders isn’t just to validate the JSON, but also to transform the data without going through intermediate data structures.
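
For instance, a bs-json decoder can reshape the payload as it validates it, with no intermediate DTO type (the field names here are made up):

/* Hypothetical payload: {"first_name": "...", "last_name": "..."} flattened
   into the single field the app actually uses. */
type user = {name: string};

let user = json =>
  Json.Decode.{
    name:
      field("first_name", string, json)
      ++ " "
      ++ field("last_name", string, json),
  };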

I see Milk has migrations which support at least some changes to the shape of the data, so maybe that’s more feasible. But now you’ve still added a bunch of complexity to your build process, a dependency on a complicated piece of third-party code which may or may not continue to be supported, you have to learn about a bunch of magic annotations that aren’t applicable to anything else, and you’ve probably added a significant amount of runtime code to support all the fancy features you don’t need. And you STILL need to write transformation code to get the data into a shape that is appropriate for your application’s needs.

So my assessment is: Just don’t do it. I really doubt it’s going to be worth it. Plain JSON decoders are simple, light and really not all that boilerplate-y when you start dealing with complex data structures that change over time.


#4

Thanks for the thoughtful response. I’m coming from Elm and, to be honest, 95% of decoding has always been mindless boilerplate that could be automated.

Of course decoders are great when the data doesn’t exactly match the types and needs massaging, but for many programs that is not the case.

For example, look at serde in Rust and how nice, easy and safe it is. Almost everyone is using it there.

Reason hits a sweet spot with interop and how it maps to JS semantics, but if the choice is between easy but prone to runtime errors and verbose but runtime-safe, I’m not sure what the advantages are over TypeScript (easy but runtime-unsafe) or Elm (hard but runtime-safe).

I thought this would already be solved with macros, but it seems there is a gap to cover there. Milk could be the solution, although IMO a BuckleScript attribute like the existing converters would be easiest.

Thanks for the responses :pray:


#5

Coming from Scala, I’m also used to its powerful and flexible macros for generating JSON codecs. IMHO it’s best to treat this serialized data as part of the service-layer data model for communicating with the outside world, and not as part of the app’s core data model. In other words, data transfer objects (DTOs).

DTOs can be converted into the app’s core data types. It’s less error-prone, and easier for newcomers to the codebase, to write conversion code in the programming language’s basic syntax and semantics than to have to learn a new, specialized JSON library.

Imagine that your service-layer data types model exactly the JSON that you’re getting from the outside world, and can change as necessary, decoupled from your core data model. You’d just change the service-layer type definitions, and the compiler’s type errors would guide you through updating the conversion code. That’s an efficient workflow. In fact, it’s almost exactly the workflow of using Thrift, Protobuf, etc. with codegen for those encodings.
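
To make that concrete, here’s a rough sketch; the module, types, and field names are invented for illustration:

/* Service-layer DTO that mirrors the wire format exactly (hypothetical). */
module Dto = {
  type article = {
    id: string,
    published_at: string, /* ISO-8601 string, as the server sends it */
  };
};

/* Core application type, shaped for what the app actually needs. */
type article = {
  id: string,
  publishedAt: Js.Date.t,
};

/* Plain conversion code; when the DTO changes, the compiler points here. */
let articleFromDto = (dto: Dto.article): article => {
  id: dto.id,
  publishedAt: Js.Date.fromString(dto.published_at),
};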


#6

I don’t know much about how serde is used in Rust, or what the alternatives are. But I do know how this mindset has affected ASP.NET developers.

The canonical ASP.NET application has a graph of entity types which maps more or less one-to-one to database tables. With the help of a few annotations, entity objects can then be serialized and deserialized automatically by an overcomplicated ORM framework (a concept with a whole lot of problems in itself, but let’s not go there). On the other end there’s a set of DTO objects that model the server responses, and which are automatically serialized to JSON. To get from entity objects to DTOs you need to copy a whole bunch of data that is mostly, but not entirely, of the same shape. To avoid all that boilerplate you’d probably use some object-mapping library to create mappings with complicated rules. And on top of this you need to convert to and from any application-specific models you might have.

All of this instead of just having an application model populated by specific database queries and serialized directly to JSON via specific mappings. This design has come about because of the dream of convenient automatic serialization and deserialization, ignoring the fact that what you get usually isn’t what you actually want or need, because the shape of the data has been optimized for some other purpose.

The problem that keeps this myth going is that if you’re used to this design, and so many are, then it really IS convenient, because you’re comparing it to populating the entity objects and DTOs by hand. But if you look at what your application actually needs, and avoid putting entity objects into generic global stores and such, I think you’ll find that writing JSON decoders manually might lead to LESS boilerplate overall. And your component/view hierarchy might turn out better too.

Finally, one last note: libraries and tools for automatic serialization and deserialization are usually very specific. They apply only to a single language, and usually only a single format. Once you move on to a different language, or a different data format comes into vogue (remember XML? That one was fun…), you’ll have to use a different library or tool that probably has a completely different API. Meanwhile, SQL is applicable to any relational database and has been for many decades. And most JSON decoders follow a combinator pattern that you’ll see again and again in functional programming; it’s not very specialized at all. Once you’ve learned these technologies you can apply them pretty much forever. That’s powerful knowledge.


#7

Hey @joakin, as an alternative to the solutions mentioned above, there is also atdgen, which generates encoders, decoders, and type definitions from the declarations specified in .atd files. It can auto-generate them for OCaml, BuckleScript, Java, and Scala (early support). It’s battle-tested and currently used in production by companies like Ahrefs. The main downside is that the syntax of .atd files is something new to learn, although it’s pretty similar to OCaml.

If you want to learn about it, we just did a workshop at ReasonConf with a small BuckleScript server/client application. The code is available at https://github.com/ahrefs/atdgen-workshop-starter/. If you’re already familiar with bs-json you can go directly to milestone 2. You will also find branches with solutions for each milestone. Finally, there’s a blog post if you want to know more about atdgen and BuckleScript.

If you have questions, feel free to reach out here or on the Reason Discord channel (I’m jchavarri there as well).


#8

Since we’re talking about codegen from external type definitions, let me also point out for completeness that ocaml-protoc can generate JSON codecs for BuckleScript or Yojson, as well as the binary protocol format, for Protobuf syntax 3. It’s available on opam and is a single command-line tool which can be integrated into your build.


#9

Also for completeness, if you’re coming from Elm and feel like your Reason code doesn’t have enough boilerplate :wink:, you could take a look at ocaml-decoders.


#10

Try https://github.com/ryb73/ppx_decco (I just discovered it). It auto-generates encoders and decoders from record type definitions. Can handle optional fields and field renaming, among other things. Looks pretty powerful.
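
For anyone curious, here’s roughly what that looks like, based on my reading of the decco README (double-check the details against the current docs; the type and JSON below are made up):

/* [@decco] generates line_encode and line_decode alongside the type. */
[@decco]
type line = {
  id: int,
  [@decco.key "display_text"] text: string, /* read from the "display_text" JSON key */
  author: option(string), /* null in the JSON becomes None */
};

let result: Belt.Result.t(line, Decco.decodeError) =
  line_decode(Js.Json.parseExn({|{"id": 1, "display_text": "hi", "author": null}|}));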


#11

@yawaramin that’s exactly what I was looking for. Particularly for safe JS interop. I will give it a try!