Setting up coverage reports on TFS with OpenCover

Code coverage is a metric that indicates what percentage of your source code is exercised by your tests. It is
certainly a good idea to generate code coverage reports as part of Continuous Integration – it lets you keep track of the quality of your tests and even require your builds to reach a certain coverage level.

Code coverage in Visual Studio is only available in the Enterprise edition. Fortunately, thanks to OpenCover you can still generate coverage reports even if you don’t have access to the Enterprise license.

In this article I will show you how to configure a Build Definition on Team Foundation Server 2015/2017 to use OpenCover to produce code coverage reports.

Preparations

We are going to put some files on TFS. We will need:

  • RunOpenCover.ps1 – PowerShell script that will run OpenCover – we are going to write it in a moment
  • vsts-task-lib – a PowerShell script library which provides some helpful utility functions
  • OpenCover executable
  • OpenCoverToCoberturaConverter – a tool to convert the report to a format understandable by Visual Studio
  • (optional) ReportGenerator – a tool to generate HTML reports

The last three items are available as NuGet packages. I suggest organizing all these files into the following directory structure:
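One possible layout (directory names are illustrative):

```
BuildTools/
├── RunOpenCover.ps1
├── vsts-task-lib/
├── OpenCover/
├── OpenCoverToCoberturaConverter/
└── ReportGenerator/
```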

Once done, check it in to your TFS instance.

I’ve put the BuildTools directory at the top level of the repository. Next, I’ve added a mapping to my Build Definition to make that directory available during the build.

Create the PowerShell script

Let’s now write the PowerShell script. The script is going to perform a couple of steps:

  • We would like our script to use a file pattern to scan for test assemblies, in the same way the “native” Visual Studio Tests task does. For that, we can use the Find-Files cmdlet available in vsts-task-lib.
  • Next, we run OpenCover and use the list of paths with test assemblies as parameters.
  • Next, we need to convert the results file produced by OpenCover to Cobertura – a file format which TFS can understand.
  • Finally, we can use the same results file to produce an HTML, human-readable report.

The script will take a couple of parameters as input:
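A sketch of the parameter list (names are illustrative; they must match the arguments passed from the build step):

```powershell
param (
    [string]$testAssembly,      # file pattern for test assemblies, e.g. **\*Tests*.dll
    [string]$openCoverPath,     # path to OpenCover.Console.exe
    [string]$sourcesDirectory,  # root folder to scan for test assemblies
    [string]$outputDirectory    # where coverage results and reports will be written
)
```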

Next, let’s run the Find-Files utility to search against the pattern defined in $testAssembly. This code is copied from the original Run Visual Studio Tests task source code.
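A sketch of that step, assuming vsts-task-lib is checked in next to the script:

```powershell
# make the Find-Files cmdlet available
Import-Module "$PSScriptRoot\vsts-task-lib"

# resolve the pattern (e.g. **\*Tests*.dll) to a list of assembly paths
$testAssemblyFiles = Find-Files -SearchPattern $testAssembly -RootFolder $sourcesDirectory
```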

We can finally run OpenCover. The command to do this is pretty complicated. OpenCover supports different test runners (VSTest being only one of them) so we need to specify the path to VSTest as one of the arguments. The path below ( %VS140COMNTOOLS%\..\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe ) is valid for a Visual Studio 2015 installation.
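A sketch of the invocation (flags follow the OpenCover documentation; adjust paths to your layout):

```powershell
$vsTestPath = $env:VS140COMNTOOLS + "..\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe"
$targetArgs = ($testAssemblyFiles -join " ") + " /logger:trx"   # TRX results for TFS

& $openCoverPath `
    -register:user `
    "-target:$vsTestPath" `
    "-targetargs:$targetArgs" `
    "-output:$outputDirectory\OpenCover.xml" `
    -mergebyhash `
    -returntargetcode
```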

Another important argument is -mergebyhash . It forces OpenCover to treat assemblies with the same hash as one. I spent a few hours figuring out why my coverage score was so low – it turned out that OpenCover had analyzed several copies of the same assembly.

Next, let’s convert the results generated by OpenCover to Cobertura format.
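Something along these lines (a sketch):

```powershell
& "$PSScriptRoot\OpenCoverToCoberturaConverter\OpenCoverToCoberturaConverter.exe" `
    "-input:$outputDirectory\OpenCover.xml" `
    "-output:$outputDirectory\Cobertura.xml" `
    "-sources:$sourcesDirectory"
```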

Finally, we will generate an HTML report based on the results from OpenCover.
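A sketch:

```powershell
& "$PSScriptRoot\ReportGenerator\ReportGenerator.exe" `
    "-reports:$outputDirectory\OpenCover.xml" `
    "-targetdir:$outputDirectory\CoverageReport"
```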

And that’s it.

Configure the Build Definition

We will need to add three build steps to our Build Definition. If you have a Visual Studio Tests task in it, remove it – you will no longer need it.

  • PowerShell task – set the Script Path to point to RunOpenCover.ps1 and specify the Arguments; see the example right after this list

  • Publish Test Results task – as a by-product of generating coverage reports, we produce test results (TRX files); configure this task so that TFS knows where to find them

  • Publish Code Coverage Results task – point it at the generated Cobertura file and the HTML report directory; thanks to this task the results will be visible on the build summary page
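Example arguments for the PowerShell task (paths and the pattern are illustrative; they must match the param block of RunOpenCover.ps1):

```
-testAssembly "**\*Tests*.dll" -openCoverPath "$(Build.SourcesDirectory)\BuildTools\OpenCover\OpenCover.Console.exe" -sourcesDirectory "$(Build.SourcesDirectory)" -outputDirectory "$(Build.ArtifactStagingDirectory)\Coverage"
```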

And that’s it! Run the build definition and enjoy your code coverage results. You can find them on the build summary page. The HTML report is available as one of the build artifacts.

Understand monads with LINQ

This post is another attempt at explaining the M word in an approachable way. This explanation will best suit C# developers who are familiar with LINQ and query expressions. However, if you are not familiar with C# but would like to learn how powerful and expressive some of its features are, please read on!

Recap of LINQ and query expressions

LINQ is a technology introduced in C# 3.0 and .NET 3.5. One of its major applications is processing collections in an elegant, declarative way.

Here’s an example of LINQ’s select expression:
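For example (a minimal illustration):

```csharp
int[] numbers = { 1, 2, 3, 4 };
IEnumerable<int> squares = numbers.Select(n => n * n);   // 1, 4, 9, 16
```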

Query expressions are one of the language features which constitute LINQ. Thanks to them, LINQ queries can be written in a way that resembles SQL:
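The same query written as a query expression:

```csharp
IEnumerable<int> squares =
    from n in numbers
    select n * n;
```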

Before LINQ you would need to write a horrible, imperative loop which iterates over the numbers array and appends the results to a new array.

Single element collection: Maybe class

It’s pretty easy to understand what the select expression does in the above example: it applies a given expression to each element of a collection and produces a collection containing the results.

Let’s now imagine that instead of an arbitrary collection, we are working with a special kind of collection – one that can have either one element or no elements at all. In other words, it’s either empty or full.

How should the select expression act on such a collection? Exactly the same way it works with regular collections. If our collection has one element, we apply the given expression to it and return a new collection with the result. If the collection is empty, we just return an empty collection.

Note that such a special collection is actually quite interesting – it represents an object that either has a value or is empty. Let’s create such an object and call it Maybe.
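A sketch of such a type (details may differ, but the idea is a value plus a flag saying whether it’s there):

```csharp
public class Maybe<T>
{
    public bool HasValue { get; }
    public T Value { get; }

    public Maybe() { }          // empty

    public Maybe(T value)       // full
    {
        Value = value;
        HasValue = true;
    }
}
```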

Let’s create two factory methods to allow more convenient creation of instances of Maybe.
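For instance:

```csharp
public static class Maybe
{
    public static Maybe<T> Some<T>(T value) => new Maybe<T>(value);
    public static Maybe<T> None<T>() => new Maybe<T>();
}
```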

Thanks to type inference in generic method calls and the using static feature, we can now simply write:
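Like this:

```csharp
using static Maybe;

var some = Some(42);      // Maybe<int> holding a value
var none = None<int>();   // empty Maybe<int>
```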

Making Maybe LINQ-friendly

Since we’ve already discussed how select would work on Maybe, let’s implement it!

Adding support for query expressions to your custom types is surprisingly easy. You just need to define a method which conforms to a specific signature (an interesting design decision by the C# creators which allows more flexibility than requiring the type to implement a specific interface).
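A sketch of Select for Maybe, reconstructed from the description below:

```csharp
public Maybe<TResult> Select<TResult>(Func<T, TResult> mapper)
{
    return HasValue
        ? new Maybe<TResult>(mapper(Value))   // apply the mapper to the contained value
        : new Maybe<TResult>();               // empty in, empty out
}
```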

What’s going on here? Firstly, let’s take a look at the signature. Our method takes a function which transforms the value contained in Maybe to another type. It returns an instance of Maybe containing a value of the result type.

If it’s confusing, just replace Maybe with List or IEnumerable. It makes perfect sense to write a select expression which transforms a list of ints to a list of strings. It works the same way with our Maybe type.

Now, the implementation. There are two cases:

  • If the object contains a value, then apply the mapper function and return a new Maybe instance with the result
  • If the object is empty, there is nothing to convert – return a new empty Maybe instance

Let’s give it a try:
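For example:

```csharp
var maybeNumber = Some(42);
var maybeText = from n in maybeNumber select n.ToString();   // holds "42"

var empty = None<int>();
var stillEmpty = from n in empty select n.ToString();        // still empty
```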

Nice! We can now use select expressions with Maybe type.

Taking it one step further

Let’s now imagine that, given an employee’s id, our goal is to return the name of their supervisor’s supervisor. A person may or may not have a supervisor. We are given a repository class with the following method:
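A sketch (the id type and the class name are assumptions):

```csharp
public class PersonRepository
{
    private readonly Dictionary<int, Person> people = new Dictionary<int, Person>();

    public Person GetPersonById(int id)
    {
        Person person;
        return people.TryGetValue(id, out person) ? person : null;
    }
}
```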

And a Person class:
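A sketch:

```csharp
public class Person
{
    public string Name { get; set; }
    public Person ReportsTo { get; set; }   // the supervisor; may be null
}
```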

In order to find the person’s supervisor’s supervisor’s name we would need to write a series of if statements:
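Something like this (assuming a repository field on the containing class):

```csharp
public string GetSupervisorSupervisorName(int id)
{
    var person = repository.GetPersonById(id);
    if (person != null)
    {
        var supervisor = person.ReportsTo;
        if (supervisor != null)
        {
            var supervisorSupervisor = supervisor.ReportsTo;
            if (supervisorSupervisor != null)
                return supervisorSupervisor.Name;
        }
    }
    return null;
}
```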

Can we improve this code with our new Maybe type? Of course we can! First of all, since Maybe represents a value which may or may not exist, it seems reasonable for GetPersonById to return Maybe<Person> instead of Person.

Next, let’s modify the Person class. Since a person can either have or not have a supervisor, it’s again a good fit for the Maybe type:
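A sketch:

```csharp
public class Person
{
    public string Name { get; set; }
    public Maybe<Person> ReportsTo { get; set; }   // explicitly optional
}
```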

Given these modifications we can now rewrite GetSupervisorSupervisorName in a neater and more elegant way:
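A sketch of the rewritten method:

```csharp
public Maybe<string> GetSupervisorSupervisorName(int id)
{
    return from person in repository.GetPersonById(id)
           from supervisor in person.ReportsTo
           from supervisorSupervisor in supervisor.ReportsTo
           select supervisorSupervisor.Name;
}
```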

Why is this better than the previous version? First of all, we explicitly represent the fact that, given a person, the method might or might not return a valid result. Previously, the method always returned a string. There was no way to indicate that it can sometimes return null (apart from a comment). A user of such a method could forget to perform a null check and, as a consequence, be surprised by a runtime error.

What’s more, we avoid the nesting of if statements. In this example we only go two levels deep, but what if there were 5 levels? Code without these nested if statements is much cleaner and more readable. It expresses the actual logic rather than the boilerplate of null-checking.

Making it work

If you’re copying these snippets to Visual Studio, you might have noticed that the last one won’t compile.

By implementing Select we told the compiler how to apply functions to values inside Maybe instances. However, here we have a slightly more complex situation. We take a value which sits inside a Maybe instance and apply a function to it. As a result we get another Maybe instance, so now we have a Maybe inside a Maybe. The compiler doesn’t know how to handle this situation and we need to tell it by implementing SelectMany.
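A sketch of SelectMany, matching the signature the compiler expects for query expressions with multiple from clauses:

```csharp
public Maybe<TResult> SelectMany<TIntermediate, TResult>(
    Func<T, Maybe<TIntermediate>> mapper,
    Func<T, TIntermediate, TResult> getResult)
{
    // the nested null-handling we used to write by hand lives here now
    if (HasValue)
    {
        var intermediate = mapper(Value);
        if (intermediate.HasValue)
            return new Maybe<TResult>(getResult(Value, intermediate.Value));
    }
    return new Maybe<TResult>();
}
```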

The first parameter to SelectMany is a function which takes a value (which sits inside Maybe) and returns a new Maybe. In our example, that would be a function which takes a Person and returns its ReportsTo property.

The second parameter is a function which takes the original value together with the value sitting inside the Maybe returned by the first function, and transforms them into a result. In our case, that would be a function which takes a Person and returns their Name.

Inside the implementation we have the nested if statements that we had to write when we didn’t use the Maybe type. And this is the crucial idea about monads – they help you hide ugly boilerplate code and let the developer focus on the actual logic.

Again, let me draw a diagram for those of you who prefer visual aids:

Ok, so what exactly is a monad?

A monad is any generic type which implements SelectMany (strictly speaking, this is far from a formal definition, but I think it’s sufficient in this context and captures the core idea).

SelectMany is a slightly more general version of an operation which in the functional programming world is referred to as bind.

Monadic types are like wrappers around some values. Binding monads is all about composing them. By wrapping and unwrapping the values inside monads, we can perform additional operations (such as handling empty results in our case) and hide them away from the user.

Maybe is a classic example of a monad. Another great example is C#’s Task<T> type. You can think of it as a type that wraps some value (the one that will be returned when the task completes). By composing tasks you express that one task should be executed after the other finishes.

Summary

I hope this article helped you understand what monads are about. If you find this interesting, check out the F# programming language where monads are much more common and feel more natural. Check out this excellent resource about F#: https://fsharpforfunandprofit.com/.

It’s also worth mentioning that there exists an interesting C# library which exploits the concepts I described in this article: https://github.com/louthy/csharp-monad. Check it out if you’re interested.

Firebase authentication in Angular2 application with Angularfire2

Implementing authentication in web apps is a tedious and repetitive task. What’s more, it’s very easy to do it wrong and expose security holes in our app. Fortunately, Firebase Authentication comes to the rescue, offering authentication as a service. It means that you no longer need to implement storage and verification of credentials, email verification, password recovery, etc. In this post I’ll explain how to add email/password authentication to an Angular2 application.

Side note: Firebase Authentication can be very useful when building a serverless application.

For reference, here is a working example illustrating this article: https://github.com/miloszpp/angularfire-sdk-auth-sample.

 

Overview

Firebase offers two ways of implementing authentication:

  • FirebaseUI Auth – a library of ready-to-use GUI components (such as login/registration forms, password recovery, etc.)
  • Firebase Authentication SDK – a more flexible approach in which we implement the above components ourselves; the role of Firebase is to store and verify user credentials; we’ll focus on this one

We’ll implement three components:

  • Register component will show a registration form and will ask Firebase to create an entry for a user upon submission
  • Login component will show a login form and will ask Firebase to verify provided credentials upon submission
  • Home component will show the currently logged-in user (provided there is one)

We’ll use the excellent Angularfire2 library. It provides an Angular-friendly abstraction layer over Firebase. Additionally, it exposes authentication state as an observable, making it very easy for other components to subscribe to events such as login and logout.

Preparations

To begin with, let’s install Angularfire2 and Firebase modules:
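Assuming an npm-based setup:

```
npm install angularfire2 firebase --save
```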

Next, we need to enable email/password authentication method in the Firebase console.

Firebase: enabling email/password authentication

Finally, let’s load Angularfire2 in our app.module.ts:
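A sketch of the relevant part (config values are placeholders for your own Firebase settings; the two-argument initializeApp call follows the pre-4.0 Angularfire2 API):

```typescript
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule } from '@angular/forms';
import { AngularFireModule, AuthProviders, AuthMethods } from 'angularfire2';
import { AppComponent } from './app.component';

export const firebaseConfig = {
  apiKey: '<your-api-key>',
  authDomain: '<your-app>.firebaseapp.com',
  databaseURL: 'https://<your-app>.firebaseio.com',
  storageBucket: '<your-app>.appspot.com'
};

@NgModule({
  declarations: [AppComponent],
  imports: [
    BrowserModule,
    FormsModule,
    AngularFireModule.initializeApp(firebaseConfig, {
      provider: AuthProviders.Password,
      method: AuthMethods.Password
    })
  ],
  bootstrap: [AppComponent]
})
export class AppModule { }
```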

Login component

Firstly, let’s inject AngularFire into the component:
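A sketch (component metadata trimmed to the essentials):

```typescript
import { Component } from '@angular/core';
import { AngularFire, FirebaseAuthState } from 'angularfire2';

@Component({ selector: 'app-login', templateUrl: './login.component.html' })
export class LoginComponent {
  authState: FirebaseAuthState;
  model = { email: '', password: '' };

  constructor(private af: AngularFire) {
    // keep track of the current authentication state
    this.af.auth.subscribe(auth => this.authState = auth);
  }
}
```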

As you can see, this.af.auth  is an observable. It fires whenever an event related to authentication occurs – in our case, logging in or logging out. FirebaseAuthState  stores information about the currently logged-in user.

Next, let’s add two methods for logging in and logging out:
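A sketch using the pre-4.0 Angularfire2 auth API (AuthProviders and AuthMethods come from angularfire2):

```typescript
login() {
  this.af.auth.login(
    { email: this.model.email, password: this.model.password },
    { provider: AuthProviders.Password, method: AuthMethods.Password });
}

logout() {
  this.af.auth.logout();
}
```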

As you can see, we simply propagate calls to the Angularfire2 API. When logging in, we need to provide an email and a password (encapsulated in the model object).

Finally, we need some HTML to display the form:
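Something along these lines (bindings assume the fields defined above):

```html
<form *ngIf="!authState" (ngSubmit)="login()">
  <input [(ngModel)]="model.email" name="email" placeholder="Email">
  <input [(ngModel)]="model.password" name="password" type="password" placeholder="Password">
  <button type="submit">Log in</button>
</form>

<div *ngIf="authState">
  Logged in as {{ authState.auth?.email }}
  <button (click)="logout()">Log out</button>
</div>
```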

The form is only visible when the user is not logged in ( authState  will be undefined). Otherwise, we show the user name and the logout button.

Register component

We’ve allowed our users to log in, but so far there are no registered users! Let’s fix that and create a registration component.

Firstly, we need to inject the AngularFire service, just like we did in the login component.

Next, let’s create a method to be called when the user submits their registration details:
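A sketch (createUser is part of the pre-4.0 Angularfire2 API):

```typescript
register() {
  this.af.auth.createUser({ email: this.model.email, password: this.model.password })
    .then(() => console.log('registration successful'))
    .catch(error => console.error(error));
}
```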

Finally, here goes the HTML form:
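A minimal version:

```html
<form (ngSubmit)="register()">
  <input [(ngModel)]="model.email" name="email" placeholder="Email">
  <input [(ngModel)]="model.password" name="password" type="password" placeholder="Password">
  <button type="submit">Register</button>
</form>
```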

Summary

In this tutorial I showed you how to take advantage of Firebase Authentication and use it in an Angular 2 application. This example doesn’t exploit the full potential of Firebase Authentication – it can also do email verification (with actual email sending and customizable email templates), password recovery and logging in with social accounts (Facebook, Twitter, etc.). I will touch on these topics in the following articles.

Let me know if you have any feedback regarding this post – feel free to post a comment!

C# in Depth: book notes

I just finished reading this must-read book for C# developers. I believe that it’s very easy to learn a programming language to an extent that is sufficient for creating software. Because of that, one can easily lose motivation to dig deeper and gain a better understanding of the language. C# in Depth is proof of why one shouldn’t stop at this point. There is a lot to learn by looking at the details of a language, how it evolved and how some of its features are implemented.

I think the book is fantastic. I loved the author’s writing style which is very precise (very little hand waving) but not boring at the same time. It feels that he’s giving you just the right amount of detail.

Here are a couple of interesting things I learned about when reading the book. The list is by no means complete but it gives a taste of what’s in the book.

  • I learned that it’s possible to support LINQ query expressions for your own types very easily. The mechanism is convention-based – there is no specific interface to implement; your type just has to have methods that match specific signatures. This didn’t sound right to me at first, but if you think about it, it allows for greater flexibility. For example, with such an approach you can add query expression support to existing types (which you don’t control) with extension methods.
  • I finally understood why the keywords used to indicate variance in generic types are called out and in. A generic type parameter can be covariant if it’s used for values that are coming out of an API (something’s coming out, so you can only increase the restriction on it when deriving). Conversely, when a value is an input of an API, its type can be contravariant (something’s coming in, so you can relax the restrictions when deriving). This explanation plays well with my intuition of how collections can be covariant as long as they are immutable (i.e. there are no inputs to the API); see the snippet after this list.
  • I understood how dynamic typing is implemented in C# and how to create your own types which can react dynamically (with IDynamicMetaObjectProvider, DynamicObject and ExpandoObject). The chapter explaining what code is generated when making dynamic calls is the most complex (and most interesting) piece of the book.
  • I understood what code is generated when using the async/await feature and what the consequences are. For example, exceptions thrown inside an async method are captured in the returned task rather than thrown directly, so argument validation won’t give immediate feedback to the caller unless the method is awaited at the point of calling. Something similar applies to iterators, whose bodies don’t execute at all until enumeration begins.
  • I learned that something as simple as a foreach loop is actually doing a lot of work under the hood – it creates a try/finally block and disposes of the enumerator if it happens to implement IDisposable.
  • I came to appreciate the complexity of type inference of lambda expression parameters and return types.
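To illustrate the out/in intuition from the variance bullet above, a minimal example:

```csharp
using System;
using System.Collections.Generic;

class VarianceDemo
{
    static void Main()
    {
        // out: T only comes out of IEnumerable<out T>, so widening is safe
        IEnumerable<string> strings = new List<string> { "a", "b" };
        IEnumerable<object> objects = strings;

        // in: T only goes into Action<in T>, so narrowing is safe
        Action<object> printAnything = o => Console.WriteLine(o);
        Action<string> printString = printAnything;
        printString("works");
    }
}
```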

To sum up, I totally recommend reading this book. It’s not a small time investment, but I think it’s totally worth it.

Building serverless web application with Angular 2, Webtask and Firebase

Recently I’ve been playing a lot with my pet project, Tradux. It is a simple trading platform simulator which I built as an exercise in Redux, event sourcing and serverless architectures. I’ve decided to share some of what I learned in the form of this tutorial.

We’re going to build (guess what?) a TODO app. The client (in Angular 2) will be calling a Webtask whenever an event occurs (task created or task marked as done). The Webtask will update the data in the Firebase Database which will be then synchronized to the client.

Webtask is a function-as-a-service offering which allows you to run pieces of code on demand, without having to worry about infrastructure, servers, etc. – i.e. serverless.

Architecture

The full source code is available on Github.

UPDATE: recently I gave a talk on this topic during the #11 AngularJS Warsaw meetup. During the talk I built a slightly different demo application which additionally performs spell checking in the webtask. Check out the Github repo for the source code.

Project skeleton

Let’s start with a very simple client in Angular 2. We will use Angular CLI to scaffold most of the code.
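At the time of writing the CLI package was called angular-cli (the project name below is an example):

```
npm install -g angular-cli
ng new serverless-todo
```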

It takes a while for this command to run and it will install much more stuff than we need, but it’s still the quickest and most convenient way to go.

Let’s create a single component.
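For example:

```
ng generate component tasks
```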

Now, let’s create the following directory structure. We’d like to share some code between the client and the webtask, so we will put it in a common directory.
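One possible layout:

```
src/
├── app/            (the Angular client, including the Tasks component)
├── common/
│   └── model.ts    (code shared between the client and the webtask)
└── webtasks/
    └── add-task.ts
```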

Let’s start with defining the following interfaces inside model.ts. The first one is a command that will be sent from the client to the webtask. The second one is the entity representing an item on the list that will be stored in the database.
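A sketch (field names are assumptions, used consistently in the rest of this post):

```typescript
// common/model.ts
export interface AddTaskCommand {
  content: string;
}

export interface Task {
  content: string;
  createdDate: string;
}
```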

Finally, remember to add the Tasks component to app.component.html :
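For example:

```html
<!-- the selector name assumes the Angular CLI default prefix -->
<app-tasks></app-tasks>
```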

Adding Firebase to the Client

Before we proceed, you need to create a Firebase account. Firebase is a cloud platform which provides useful services for developing and deploying web and mobile applications. We will focus on one particular aspect of Firebase – the Realtime Database. The Realtime Database is a No-SQL storage mechanism which supports automatic synchronization of clients. In other words, when one of the clients modifies a record in the database, all other clients will see the changes (almost in real-time).

Once you’ve created the account, let’s modify the database access rules. By default, the database only allows access to authenticated users. We will change that to allow anonymous reads. You can find the Rules tab once you click on the Database menu item.
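A rule set along these lines allows anonymous reads while keeping writes restricted:

```json
{
  "rules": {
    ".read": true,
    ".write": false
  }
}
```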

Firebase provides a generous free limit in the Spark plan. Once you have defined a new application, put the following definition in config.ts :
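A sketch with placeholder values:

```typescript
// config.ts
export const firebaseConfig = {
  apiKey: '<your-api-key>',
  authDomain: '<your-app>.firebaseapp.com',
  databaseURL: 'https://<your-app>.firebaseio.com',
  storageBucket: '<your-app>.appspot.com'
};
```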

If you cannot find your settings, they are available in the Firebase console under your project’s settings.

Let’s now add Firebase to our client. There is an excellent library called AngularFire2 which we are going to use. Run the following commands:
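Assuming an npm-based setup:

```
npm install angularfire2 firebase --save
```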

Modify the imports section of AppModule  inside app.module.ts  so that it looks like this (you can import AngularFireModule  from angularfire2  module):
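A sketch of the module (the modules other than AngularFireModule are the CLI defaults; the config import path is an assumption):

```typescript
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule } from '@angular/forms';
import { HttpModule } from '@angular/http';
import { AngularFireModule } from 'angularfire2';
import { AppComponent } from './app.component';
import { TasksComponent } from './tasks/tasks.component';
import { firebaseConfig } from './config';

@NgModule({
  declarations: [AppComponent, TasksComponent],
  imports: [BrowserModule, FormsModule, HttpModule, AngularFireModule.initializeApp(firebaseConfig)],
  bootstrap: [AppComponent]
})
export class AppModule { }
```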

Now you can inject the AngularFire object into the Tasks component ( tasks.component.ts ):
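A sketch; the '/tasks' path is an assumption:

```typescript
import { Component } from '@angular/core';
import { AngularFire, FirebaseListObservable } from 'angularfire2';
import { Task } from '../common/model';

@Component({
  selector: 'app-tasks',
  templateUrl: './tasks.component.html'
})
export class TasksComponent {
  tasks: FirebaseListObservable<Task[]>;
  newTaskContent = '';

  constructor(private af: AngularFire) {
    // the list is kept in sync with the database automatically
    this.tasks = af.database.list('/tasks');
  }
}
```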

You will also need some HTML to display tasks. I will include the form for adding tasks as well ( tasks.component.html ):
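A sketch (field names match the interfaces defined earlier):

```html
<ul>
  <li *ngFor="let task of tasks | async">{{ task.content }}</li>
</ul>

<input [(ngModel)]="newTaskContent" name="content" placeholder="New task">
<button (click)="addTask()">Add task</button>
```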

Our client is ready to display tasks; however, there are no tasks in the database yet. Note how we can bind directly to FirebaseListObservable – Firebase will take care of all the updates for us.

Creating the Webtask

Now we need to create the Webtask responsible for adding tasks to the list. Before we continue, please create an account on webtask.io. Again, you can use it for free for the purposes of this tutorial. The website will ask you to run the following commands:
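Roughly:

```
npm install wt-cli -g
wt init your@email.com
```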

Creating Webtasks is amazingly easy. You just need to define a function which takes an HTTP context and a callback to execute when the job is done. Paste the following code inside webtasks/add-task.ts :
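A sketch of the webtask; the Firebase URL is a placeholder and I assume body parsing is enabled for the webtask:

```typescript
import { AddTaskCommand, Task } from '../common/model';
const request = require('request');

const firebaseSecret = '<your-database-secret>';
const firebaseUrl = 'https://<your-app>.firebaseio.com/tasks.json';

module.exports = (context: any, callback: (error: any, result?: any) => void) => {
  const command: AddTaskCommand = context.body;

  const task: Task = {
    content: command.content,
    createdDate: new Date().toISOString()
  };

  // the auth parameter lets us bypass the "no anonymous writes" rule
  request.post(
    { url: firebaseUrl + '?auth=' + firebaseSecret, json: task },
    (error: any) => callback(error, 'OK'));
};
```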

The above snippet parses the request body (note how it uses the same AddTaskCommand  interface as the client). Then it creates a Task  object and calls Firebase via the REST API to add the object to the collection. You could use the Firebase JavaScript client instead of calling the REST API directly; however, I couldn’t get it working in the Webtask environment.

Obviously in a production app you would perform validation here.

Note that you need to define the firebaseSecret  constant. You can find the private API key (the database secret) in the Firebase console, under your project’s settings.


Firebase complains that this is a legacy method but it’s simply the quickest way to do that.

Why do we need to pass the secret now? That’s because we defined a database access rule which says that anonymous writes are not permitted. Using the secret key allows us to bypass the rule. Obviously, in a production app you would use some proper authentication.

We are ready to deploy the Webtask. A Webtask has to be a single JavaScript file. Ours is TypeScript and it depends on many other modules. Fortunately, Webtask.io provides a bundler which can do the hard work for us. Install it with the following command:
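If I recall correctly, the bundler ships as the webtask-bundle package:

```
npm install -g webtask-bundle
```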

Now we can compile the TypeScript code to JavaScript, then run the bundler to create a single file and then deploy it using the Webtask CLI:
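Roughly like this (treat the exact flags as a sketch and check the CLI docs):

```
tsc --outDir build webtasks/add-task.ts common/model.ts
wt-bundle --output build/add-task.bundle.js build/add-task.js
wt create build/add-task.bundle.js --name add-task
```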

Voila, the Webtask is now in the wild. The CLI will tell you its URL. Copy it and paste it inside config.ts:
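For example:

```typescript
// config.ts – addition; use the exact URL printed by the CLI
export const addTaskUrl = '<your-webtask-url>';
```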

Calling the Webtask from the Client

There is just one missing piece – we need to call the Webtask from the client. Go to the Tasks component and add the method below:
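A sketch; it assumes Http (from @angular/http) is injected in the constructor and addTaskUrl is imported from config.ts:

```typescript
addTask() {
  const command: AddTaskCommand = { content: this.newTaskContent };
  this.http.post(addTaskUrl, command)
    .subscribe(() => this.newTaskContent = '');
}
```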

This function is already wired up in the HTML. Now, run the following command in the console and enjoy the result!
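That is:

```
ng serve
```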

Summary

In this short tutorial I showed how to quickly build a serverless web application using Webtasks. Honestly, you could achieve the same result without the Webtask, by talking directly to Firebase from the client. However, having this additional layer allows you to perform complex validation or calculations. See Tradux for a nice example of a complex Webtask.

You can very easily use Firebase to deploy your app.

ngPoland – Angular Conference 2016

Today I attended ngPoland – the first international conference devoted to Angular in Central Europe. I had a really good time there and decided to share some of the amazing things I learned about.

First of all, I was surprised to learn how good some of the speakers were at catching people’s attention and making sure that everyone stays awake. The conference was pretty intense (I counted 15 talks) so it was quite a challenge. It was inspiring to see how good one can become at public speaking and working with large audiences.

Photo by Phil Nash from Twilio

The key takeaway for me is to definitely look into Redux (the presentation by Nir Kaufman). The framework introduces great ideas from functional programming to the frontend world. Redux allows you to express your application’s logic as a set of reducer functions which are applied to a global, immutable state object in order to produce the “next version” of the state. Thanks to that, it’s much easier to control and predict state transitions. The similarity to the State monad seems obvious.

Another very interesting point was the presentation by Acaisoft’s founder, who showed a live demo of an online quiz app with real-time statistics. The application was implemented in Angular 2 with a serverless architecture (AWS Internet of Things on the backend side), event sourcing and WebSockets. It was exciting to watch a live chart presenting the aggregated answers of 250 conference participants who connected with their mobiles.

Definitely the most spectacular talk was the one about using Angular to control hardware connected to a Raspberry Pi device (by Uri Shaked)! The guy built a physical Simon game controlled by an Angular app. Thanks to angular-iot he was able to bind LED lights to his model class. The idea sounds crazy, but it’s a really convincing demonstration that Angular can be useful outside of the browser. If you are interested, you can read more here.

Last but not least, I have to mention the workshop about TypeScript 2 (again by Uri) which I attended the day before. Although I knew TypeScript before, it was interesting to learn about the new features such as strict null checks and async/await. Coming from a C# background, it’s very easy to spot the Microsoft touch in TypeScript. I believe the language is evolving in the right direction and I’m happy to see more and more ideas from functional programming being incorporated in other areas.

Wrapping up, I think the conference did a great job of demonstrating how much is happening around frontend development. I like the general direction in which it’s evolving and I hope I will have many opportunities to work with all the new stuff.

 

“Scalability Rules: 50 Principles for Scaling Web Sites” review

Recently I decided to get into the habit of reading IT books regularly. To start with, I wanted to read something about building scalable architectures. I did some quick research on Amazon and chose Scalability Rules: 50 Principles for Scaling Web Sites by Martin L. Abbott and Michael T. Fisher. Based on comments and reviews, it was supposed to be more on the technical side. I was slightly disappointed in this respect. However, I think it is still a worthy read.

The book is divided into 13 chapters, each containing several rules. What struck me is that these rules are very diverse. We’ve got some very, very general advice that could be applied to any kind of software development (e.g. Don’t overengineer, Learn aggressively, Be competent). We’ve got stuff for CTOs or IT directors in large corporations (e.g. Have at least 3 data centers, Don’t rely on QA to find mistakes). There are also some specific, technical rules – what I was after in the first place. I’m not convinced that mixing these very different kinds of knowledge makes sense, since they are probably targeted at different audiences (which is even acknowledged by the authors in the first chapter).

Some of the rules felt like formalized common sense, backed with war stories from the authors’ experience (e.g. the AKF Scale Cube). However, some of the stuff was indeed new to me. It was also interesting to see the bigger picture and the business side of things (the potential business impact of failures, the emphasis on the costs of different solutions, etc.).

I think the book is a great choice if you are the CTO of a SaaS startup or a freshly promoted architect without prior experience of building scalable apps (having the experience would probably teach you more than the book). If you are a developer who wants very specific, technical advice, the book will serve well as an overview of topics to learn more deeply from other sources (such as database replication, caching, load balancing and alternative storage systems). Nevertheless, I think the book is a worthy read that will broaden your perspective.

Slick vs Anorm – choosing a DB framework for your Scala application

Scala doesn’t offer many DB access libraries. Slick and Anorm seem to be the most popular – both being available in the Play framework. Despite serving the same purpose, they represent completely different approaches. In this post I’d like to present some arguments that might help when choosing between the two.

What is Slick?

Slick is a Functional Relational Mapper. You might be familiar with Object Relational Mappers such as Hibernate. Slick embraces Scala’s functional elements and offers an alternative. Its authors claim that the gap between relational data and functional programming is much smaller than the gap between relational data and object-oriented programming.

Slick allows you to write type-safe, SQL-like queries in Scala which are translated into SQL. You define mappings which translate query results into your domain classes (and the other way around for INSERT  and UPDATE ). Writing plain SQL is also possible.
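A sketch of a table mapping and a type-safe query (the driver import is an example):

```scala
import slick.driver.H2Driver.api._

class Users(tag: Tag) extends Table[(Long, String, Int)](tag, "users") {
  def id   = column[Long]("id", O.PrimaryKey)
  def name = column[String]("name")
  def age  = column[Int]("age")
  def *    = (id, name, age)
}
val users = TableQuery[Users]

// translated to SQL by Slick; the compiler checks column types
val adultNames = users.filter(_.age >= 18).map(_.name)
```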

What is Anorm?

Anorm is a thin layer providing database access. It is in a way similar to Spring’s JDBC templates. In Anorm you write queries in plain SQL. You can define your own row parsers which translate query result into your domain classes. Anorm provides a set of handy macros for generating parsers. Additionally, it offers protection against SQL injection with prepared statements.
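A sketch of the Anorm style, assuming a users(id, name, age) table:

```scala
import anorm._
import anorm.SqlParser._

case class User(id: Long, name: String, age: Int)

// a row parser assembled from column parsers
val userParser: RowParser[User] =
  long("id") ~ str("name") ~ int("age") map {
    case id ~ name ~ age => User(id, name, age)
  }

// interpolated values become prepared-statement parameters
def findAdults(minAge: Int)(implicit conn: java.sql.Connection): List[User] =
  SQL"SELECT id, name, age FROM users WHERE age >= $minAge".as(userParser.*)
```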

Anorm authors claim that SQL is the best DSL for accessing relational database and introducing another one is a mistake.

Blocking/non-blocking

Slick’s API is non-blocking. Slick queries return instances of the DBIO  monad which can later be transformed into a Future . There are many benefits to a non-blocking API, such as improved resilience under load. However, you will not notice these benefits unless your web application is handling thousands of concurrent connections.
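For example (continuing the sketch from above; the config key is an example):

```scala
import scala.concurrent.Future

val db = Database.forConfig("mydb")
val names: Future[Seq[String]] = db.run(adultNames.result)
```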

Anorm, as a really thin layer, does not offer a non-blocking API.

Expressibility

Slick’s DSL is very expressive, but it will always be less expressive than plain SQL. Anorm’s authors seem to have a point that re-inventing SQL is not easy. Some non-trivial queries are difficult to express, and at times you will miss SQL. Obviously, you can always fall back to the plain SQL API in Slick, but what’s the point of query type safety if not all of your queries are covered by it?

Anorm is as expressive as plain SQL. However, passing more exotic query parameters (such as arrays or UUIDs) might require spending some time reading the docs.

Query composability

One of the huge strengths of Slick is query composability. Suppose you had two very similar queries, sharing a common part.

In Slick, it’s very easy to abstract the common part into a query.
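For example, the common part can live in a plain value and both queries can build on it (a sketch based on the mapping above):

```scala
val adults = users.filter(_.age >= 18)   // the shared part

val adultNames = adults.map(_.name)      // first query
val adultCount = adults.length           // second query
```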

In Anorm, all you can do is textual composition which can get really messy.

Inserts and updates

In Slick you can define two-way mappings between your types and SQL. Therefore, INSERT s are as simple as:
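For example:

```scala
// a sketch, reusing the mapping above
val insert: DBIO[Int] = users += (1L, "Alice", 30)
db.run(insert)
```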

In Anorm you need to write your INSERT s and UPDATE s by hand, which is usually a tedious and error-prone task.

Code changes and refactoring

Another important feature of Slick is query type safety. It’s invaluable when making changes to your data model – the compiler will make sure that you don’t miss any query.

In Anorm, nothing will help you detect typos or missing fields in your SQL, which will usually make you want to write unit tests for your data access layer.

Conclusion

Slick seems to be a great library packed with very useful features. Additionally, it will most likely save your ass if you need to perform many changes to your data model. However, my point is that it comes at a cost – writing Slick queries is not trivial and the learning curve is quite steep. And you risk that the query you have in mind is not expressible in Slick.

An interesting alternative is to use Slick’s plain SQL API – it gives you some of the benefits (e.g. the non-blocking API) without sacrificing expressiveness.

As always, it’s a matter of choosing the right tool for the purpose. I hope this article helps you weigh all the arguments.

SBT: how to build and deploy a simple SBT plugin?

A few weeks ago, when I was working on my pet project, I wanted to make it an SBT plugin. Since I had to spend some time studying the SBT docs, I decided to write a short tutorial explaining how to write and deploy an SBT plugin.

Make sure your project can be built with SBT

First of all, your project needs to be buildable with SBT. This is simple to achieve – any project that follows the standard directory structure can be built with SBT. Additionally, we are going to need a build.sbt  file with the following contents at the top level:
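A sketch (organization and name are examples):

```scala
sbtPlugin := true

name := "hello-plugin"
organization := "com.github.miloszpp"
version := "0.1.0-SNAPSHOT"
scalaVersion := "2.10.6"
```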

Note that we are using Scala version 2.10 even though 2.11 was available at the time of writing. That’s because SBT 0.13 is built against Scala 2.10. You need to make sure that you are using matching versions, otherwise you might get compile errors.

Implement the SBT plugin

Our example plugin is going to add a new command to SBT. Firstly, let’s add the following imports:
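Assuming the plugin lives in an example package:

```scala
package com.github.miloszpp

import sbt._
import Keys._
import complete.DefaultParsers._
```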

Next, we need to extend the AutoPlugin  class. Inside that class we create a nested object called autoImport. All SBT keys defined inside this object will be automatically imported into projects using this plugin. In our example we are defining a key for an input task – a way to define an SBT command that can accept command line arguments.
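A sketch (the key name is an example):

```scala
object HelloPlugin extends AutoPlugin {
  object autoImport {
    // an input task: an SBT command that can accept command line arguments
    val hello = inputKey[Unit]("Prints a greeting built from the given arguments")
  }
}
```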

Now we need to add an implementation for this task:
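Inside HelloPlugin (a sketch):

```scala
  import autoImport._

  override lazy val projectSettings = Seq(
    hello := {
      // parse the space-delimited command line arguments
      val args: Seq[String] = spaceDelimited("<arg>").parsed
      println(s"Hello, ${args.mkString(" ")}!")
    }
  )
```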

And that’s it.

Test the SBT plugin locally

SBT lets us test our plugins locally very easily. Run the following command:
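That is:

```
sbt publishLocal
```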

Now we need an example project that will use our plugin. Let’s create an empty project with the following directory structure:
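For example:

```
example-project/
├── build.sbt
└── project/
    └── plugins.sbt
```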

Inside plugins.sbt , let’s put the following code:
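Matching the example values used above:

```scala
addSbtPlugin("com.github.miloszpp" % "hello-plugin" % "0.1.0-SNAPSHOT")
```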

Note that this information needs to match organization , name  and version  defined in your plugin. Next, add the following lines to build.sbt:
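For example (using the fully qualified name from the sketch above):

```scala
enablePlugins(com.github.miloszpp.HelloPlugin)
```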

Make sure that you use the fully qualified name of the plugin object. You can use a newer Scala version than 2.10 in the consumer project.

Now you can test your plugin. Run the following command:
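For example:

```
sbt "hello world"
```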

Note the use of quotes – you are passing the whole command, along with its parameters to SBT.

Make it available to others

If you would like to make your plugin available to other users, you can use OSS Repository Hosting. They are hosting a public Maven repository for open source projects. Packages in this repository are automatically available to SBT users, without further configuration.

The whole procedure is well described here. One of the caveats for me was having to change the organization  property to  com.github.miloszpp (I host my project on GitHub). You can’t just use any string here because you need to own the corresponding domain – if you don’t own a domain, you can use the GitHub-based prefix.

Scala-ts: Scala to TypeScript code generator

I started using TypeScript a few weeks ago at work. It turns out to be a great language which lets you avoid many problems caused by JavaScript’s dynamic typing, facilitates code readability and refactoring, and does all that at a relatively small cost thanks to its modern, concise syntax.

Currently we are using TypeScript for writing the frontend part of a web application which communicates with a backend written in Scala. The backend exposes a REST API. One of the drawbacks of such a design is the need to write Data Transfer Object definitions for both the backend and the frontend and to make sure that they match each other (in terms of JSON serialization).

In other words, you need to define the types of objects being transferred between backend and frontend in both Scala and TypeScript.

Since this is a rather tedious job, I came up with an idea to write a simple code generation tool that can produce TypeScript class definitions based on Scala case classes.
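To illustrate the idea (an illustrative sketch; the actual output of scala-ts may differ in details):

```scala
// input: Scala case classes
case class Address(street: String, city: String)
case class Person(name: String, age: Int, address: Option[Address])
```

```typescript
// output: generated TypeScript definitions
export interface Address {
  street: string;
  city: string;
}

export interface Person {
  name: string;
  age: number;
  address?: Address;
}
```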

I’ve put the project on Github. It’s also available via SBT and Maven.

Here is the link to the project: https://github.com/miloszpp/scala-ts