There are two answers to that: one is "in the future, as designed" and one is "right now, in the alpha version".
- a. In the future: there should be almost no overhead. The functions are almost all simple wrappers, and the compiler should be able to optimize them away to nothing (though I don't know if that is in fact the case). We deliberately wrap the platform-native types, because those library authors have already spent time making them fast.
b. Now: most of the code was written quickly, hacked together, and will not do well in a benchmark. If you look at the code, you'll see we often convert something to a list and back to an array in order to use whatever functions Belt had available. While these implementations could certainly be improved, we haven't seen any slowdown in our app. We will certainly be improving them, and welcome contributions to help with this.
- The goal of Tablecloth is to avoid having to think about the differences between Belt, Core, Base, and other libraries your app may use (like Containers, Batteries, etc.). If we need new high-level functionality and it isn't in Belt, we should add it to Tablecloth. If some functions we're building already exist in Belt, we use them. I haven't got a sense of what it's like to contribute to Belt, as my goals were different from Belt's.
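The "convert to a list and back" shortcut mentioned above can be sketched as follows. The function name is hypothetical and the list helpers are from OCaml's `Stdlib`, not Belt; the point is only to show the pattern and why it costs extra allocations compared to operating on the array directly.

```ocaml
(* Illustrative sketch of the round-trip pattern: implement an array
   operation by converting to a list, using an existing list function,
   and converting back. Correct, but allocates two intermediate lists. *)
let filter_array ~f a =
  a
  |> Array.to_list
  |> List.filter f
  |> Array.of_list

let () =
  let evens = filter_array ~f:(fun x -> x mod 2 = 0) [| 1; 2; 3; 4 |] in
  assert (evens = [| 2; 4 |])
```

A later optimization pass can replace the round trip with a single direct traversal of the array without changing the function's signature, which is why the interface-first approach still leaves room to improve performance.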
- I haven’t seen this happen in practice, and I don’t have a plan here.
- Why not both?