Agile Retrospectives

14 Nov



A few weeks ago, our team participated in RailsRumble, a 48-hour non-stop programming competition. We named our app Kaigi, Japanese for “business meeting”.

Kaigi helps remote teams run Agile retrospectives. It is completely free and awesome.

Combining Kaigi with a video conference tool will get the job done. Give it a try!

This is our suggested guide for an efficient Retrospective meeting:

Prepare the agenda taking the available time into account (in this case, 50 minutes). A meeting should always have a limited timespan.

  • 5 min – Introduction / go over the plan from previous meeting.
  • 10 min – Data collection:
    • Each member should write down their feelings using simple keywords on stickies.
    • Use the Start-Stop-Continue technique.
    • Consider: technical stuff, planning, estimates, design, communication, demos, deploys, etc.
  • 3 min – Grouping:
    • Group related topics together; this allows everyone to learn from each other’s comments.
  • 2 min – Voting:
    • Each member has 5 votes.
    • Each member votes for the topics they’d like to discuss.
    • The votes determine the priority of each topic.
  • 25 min – Go over the topics:
    • Discuss each item.
    • The number of topics that will be discussed depends on the available time.
    • For each topic, the moderator will take some notes to discuss in the next phase.
  • 5 min – Action plan:
    • Create actionable items.
    • Every item should have a due date.
    • Assign a person responsible for each individual item. Some items may belong to the entire team.
    • The next meeting should start by analyzing whether the action items were completed or not.

Each topic that wasn’t discussed during this meeting should go into a backlog to be reviewed at the next session.

What’s new on RSpec

6 Aug

On June 2, 2014, RSpec 3 was released, after months in which announced features gradually became available through new releases in the RSpec 2 branch.

Here at Moove-IT, we decided to take a look at the new stuff, as well as say goodbye to some of the old things that have been removed. Just in case some of us who use RSpec haven’t paid attention to them lately, we put a little bit of Better Specs on top of all that.

Here is the presentation we used at our latest Friday Talk.

When and why Clojure

28 Jul


When you have to solve a problem, building an efficient and effective solution requires choosing the right tools. As programmers, the languages we choose determine a big part of the solution and influence the way we reason about it, so it’s really helpful to understand what kinds of problems a language aims to solve before starting to use it. Along that line, this post summarizes the key features of Clojure that make it preferable over other languages under some conditions, based on what Clojure’s designers and community say about the language.

Immutable data structures

The Clojure libraries provide several data structures along with a wealth of functions to manipulate them. The core ones are list, vector, map and set, and they are all treated through the same abstraction: the collection. Out of the box, Clojure provides functions that operate on any collection, like = for testing equality, count for the number of items, conj to add an element, empty to create a new empty collection, and seq to obtain a sequence out of a collection, which is a sequential view of it. To work with sequences, the core library offers the obvious first and rest, but also the more powerful map, reduce, filter, take and drop; those are just the fundamentals, and you can find a lot more in the clojuredocs.

It’s important to note that all of these data structures are immutable. For instance, conj adds an element to a collection without mutating it; the trick is to return a new collection, one that contains exactly the same elements as the original plus the new one. There’s an unavoidable cost in allocating new data for these operations, but it is heavily optimized: the structures are persistent, and Clojure is hosted on the JVM, which is known for a trustworthy garbage collector. We’re also trading that performance cost for correctness guarantees and for other performance optimizations through parallel computing, as we’ll see next.
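A quick sketch of those core functions at the REPL (the values are illustrative):

```clojure
;; conj adds an element without mutating the original collection
(def xs [1 2 3])
(conj xs 4)          ;=> [1 2 3 4]
xs                   ;=> [1 2 3]  ;; unchanged

(count xs)           ;=> 3
(first xs)           ;=> 1
(rest xs)            ;=> (2 3)

;; sequence functions work on the sequential view of any collection
(map inc xs)         ;=> (2 3 4)
(filter odd? xs)     ;=> (1 3)
(reduce + xs)        ;=> 6
```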

Functional programming

Clojure is mainly a functional programming language, meaning that it drives you to think of functions in the mathematical sense: a relation between a set of inputs and a set of outputs. Given an input, a function always yields the same value, no matter how many times you call it, and, most importantly, it doesn’t modify anything externally visible (i.e. it has no side effects). Let’s look at a trivial example.
This is a common way to transform a list in an imperative manner:
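A minimal sketch in Java, where transform is a stand-in for any per-element operation:

```java
import java.util.Arrays;

public class ImperativeTransform {
    static int transform(int x) {
        return x * 2; // stand-in per-element operation
    }

    public static void main(String[] args) {
        int[] values = {1, 2, 3};
        // Mutate each element in place, one at a time
        for (int i = 0; i < values.length; i++) {
            values[i] = transform(values[i]);
        }
        System.out.println(Arrays.toString(values)); // [2, 4, 6]
    }
}
```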

And this is how you would do the same in a functional (lisp) fashion:
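A sketch of the functional counterpart in Clojure: map builds a new sequence, and the original vector is untouched.

```clojure
(def values [1 2 3])

;; map returns a new (lazy) sequence; values is never modified
(def transformed (map #(* 2 %) values))

transformed ;=> (2 4 6)
values      ;=> [1 2 3]
```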

You could argue about the succinctness of the syntax, and you could even say that the first is clearer to read, but if you want certain correctness guarantees, at least two implications arise from the imperative choice.
First: you are bound to iterate the list sequentially and synchronously. Why? Because in imperative programming, a procedure (transform in our example) may cause alterations visible to other procedures (yes, side effects!). How do you know that the transformation of one element isn’t affected by the transformation of the previous one? How do you know that the overall result is always the same? To convince yourself, replace transform with print.
Second: you cannot share the list between threads. Since you are modifying elements in place, you could be doing so while another thread is trying to access the same element; for those coming from the Java world, this exception may sound familiar.
On the other hand, the limitations of programming with functions turn out to help in making safer constructions. Implementing transform with pure functional programming means that it won’t make any visible modification and will return a new value instead of modifying the previous one. That means you can asynchronously calculate the transformation of each element and safely share the list between threads; what is more, these optimizations can be done automatically by the compiler.

Identity and state

Here is where actual modifications take place: in order to model real world problems, we sometimes need entities that mutate over time. Clojure’s approach to achieve this, while maintaining immutable data structures and functional constructions, is to have a clearly defined concept of identity separated from the state. Here’s the definition for each concept:

  • Identity: a stable logical entity associated with a series of different values over time.
  • State: the value of an identity at a point in time.

The relation between these two concepts is that an identity has exactly one state at any point in time. Quoting the Clojure documentation: “an identity can be in different states at different times, but the state itself doesn’t change.” Notice how the immutable structures fit perfectly as states.

To better visualize these definitions, think of some examples: an identity can be a counter of page visits, while the states of the counter may be 42 or 1000; another identity could be the list of current users in a chat group, with states being (immutable) lists of users. At some point in time the counter may change from holding 42 to holding 43, yet neither 42 nor 43 changes as a value; likewise, the list of users may change from the state [‘user_1’, ‘user_2’] to the new state [‘user_2’, ‘user_3’] without any list being modified: a new list is created and the identity is updated to point at it.
Clojure implements identities with three reference types: Atoms, Agents and Refs. The difference between them is basically their concurrency semantics, i.e. whether their value changes are synchronous or asynchronous and coordinated or uncoordinated.
Atoms are simple references whose changes are synchronous and atomic, but uncoordinated and independent from other references.
Agents are also simple references, but changes are made asynchronously: the semantics are to send a change to the Agent, and the change will be applied at an unspecified time by another thread.
Finally, Refs are the only reference type that implements coordinated modifications. They are backed by Clojure’s implementation of software transactional memory, which gives you the power to modify more than one reference within a transaction.
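A small sketch of the three reference types (swap!, send and dosync/alter are the standard Clojure API for each; the values are illustrative):

```clojure
;; Atom: synchronous, uncoordinated
(def counter (atom 0))
(swap! counter inc)
@counter ;=> 1

;; Agent: asynchronous; the update is applied later, on another thread
(def logger (agent []))
(send logger conj "event-1")
(await logger)
@logger ;=> ["event-1"]

;; Refs: coordinated, transactional updates across several references
(def from (ref 100))
(def to   (ref 0))
(dosync
  (alter from - 10)
  (alter to   + 10))
[@from @to] ;=> [90 10]
```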


Clojure aims to be a language used in highly concurrent environments where a lot of work can be parallelized, scaling up by adding more processing units instead of increasing the speed of a single unit. The way to achieve this is by making you think in functional transformations and immutable values, with state changes being very controlled and limited.
Although learning a lisp-based language and migrating from other tools can be really hard or expensive, it’s a good exercise to learn this way of reasoning about problems and apply these concepts with more or less difficulty in almost any language.


Understanding Web Components

7 Jul

Quoting Eric Bidelman, an engineer at Google especially involved in this technology:

“Web components are a set of emerging standards that allow developers to expand HTML and its functionality”

And these standards are:

  • Templates
  • Custom Elements
  • Shadow DOM
  • Imports

I will try to give you a general sense of what these are.

Templates – Reusing HTML

Templates shouldn’t be a new concept to any developer out there. A lot of languages, libraries and frameworks use some kind of template engine. Let’s see how to use them:
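A minimal sketch of a template element (the id and content are illustrative):

```html
<template id="my-template">
  <h1>Software for human needs</h1>
  <img src="logo.png" alt="logo">
  <script>console.log('only runs after activation');</script>
</template>
```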

Here we are defining a new tag called template, and we can put any HTML markup we want inside it.

But what’s new here? It’s just a tag…

It is… but the content inside the template will not be rendered on the page until we explicitly activate it. Even better, no external content like images or videos will be loaded, nor will JavaScript inside a script tag be executed.

How do we activate a template?
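One way, sketched here assuming a template element with id "my-template": take its inert .content fragment, clone it, and append the clone to the DOM.

```javascript
// Grab the template and deep-clone its inert content
var template = document.querySelector('#my-template'); // assumed id
var clone = document.importNode(template.content, true);

// Appending the clone is what "activates" it: the markup renders,
// images load, and scripts execute from this point on
document.body.appendChild(clone);
```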


Custom Elements – Creating tags

HTML provides us with a set of tags that we use to define the structure of our pages. After using them you will realize that there aren’t too many, and most of the time they don’t describe what you are building. This is what it looks like if we inspect the markup of a site like Twitter:


Twitter markup.

It’s simply too hard to know what is going on; the markup lacks expressivity.

Here is where custom elements fit in. We can create our own tags that the browser will treat as normal tags, and we can name them ourselves! The only requirement is a dash (-) in the name, to distinguish them from the default tags. We can also define our own public API with custom attributes.

Let’s define our tag and register it in the DOM:
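A sketch using the registerElement API (the Custom Elements v0 draft available in Chrome at the time); the tag name and content are illustrative:

```html
<moove-it>
  <template>
    <h1>Software for human needs</h1>
  </template>
</moove-it>

<script>
  // The name must contain a dash; the prototype extends HTMLElement
  var proto = Object.create(HTMLElement.prototype);
  document.registerElement('moove-it', { prototype: proto });
</script>
```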

We can see in Chrome DevTools:


We are able to see the text “Software for human needs” in the tree, but it’s not displayed in the browser. This is because we put it inside a template and, as we said, it will not be rendered until we activate it.

And there it is! Our first custom element, using templates.

Shadow DOM – Encapsulation

If you want to build a small widget within your page, you will face one big problem: encapsulation. The DOM tree lacks it.

Your widget may have CSS classes, ids and JavaScript that relies on them to implement its behavior. You have to pay special attention here, since they could accidentally apply to other parts of the page.

So Shadow DOM is the component we will use to separate the content, and its functionality, from the presentation of our element.

To enable it in DevTools (Chrome 35+), right-click in Chrome, choose “Inspect element”, and in the settings activate “Show user agent Shadow DOM”.

Now you will be able to see a node called “#shadow-root” inside our moove-it tag, and inside of it the content of the element. This separation will enable us, among other things, to style our element without affecting the rest of the page.

Let’s see an example of how to use this feature. We will create our own custom tag, using templates and encapsulate its style and content, using Shadow DOM.
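A sketch combining the two: a template whose content, styles included, is stamped into the element’s shadow root (createShadowRoot and registerElement are the v0 APIs available in Chrome at the time; names are illustrative):

```html
<template id="my-template">
  <style>
    /* Only applies inside the shadow root */
    p { background: orange; }
  </style>
  <p>Software for human needs</p>
</template>

<p>Outside the element: not orange.</p>

<script>
  var proto = Object.create(HTMLElement.prototype);
  proto.createdCallback = function () {
    var template = document.querySelector('#my-template');
    var shadow = this.createShadowRoot(); // Shadow DOM v0
    shadow.appendChild(document.importNode(template.content, true));
  };
  document.registerElement('moove-it', { prototype: proto });
  document.body.appendChild(document.createElement('moove-it'));
</script>
```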







We can see how the browser renders the paragraph inside our element with the orange background, but doesn’t do the same with the <p> tag outside of it, even though we didn’t declare anything special in our CSS. The end user will only see a custom tag with all the style and content built in. This is the encapsulation we want for the web!

Imports – Including components

HTML imports provide us a way to include HTML documents inside other HTML documents. If you are a web developer, you know that including your CSS files requires something like this:
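The familiar stylesheet include (the path is illustrative):

```html
<link rel="stylesheet" href="css/styles.css">
```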

In a similar way we are going to import a document:
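A sketch of the import link (again, the path is illustrative):

```html
<link rel="import" href="components/moove-it.html">
```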

It is worth mentioning that importing an HTML file doesn’t mean copying all its content into the page. What we are doing is telling the parser to go and fetch the content so we can use it. The great thing is that we get to decide whether to use the entire document or just a part of it.

To access it we use:  var documentContent = document.querySelector('link[rel="import"]').import;

and inside documentContent we have our new document, to use as needed.

This gives us the opportunity to start creating reusable components that we can import when we need them without repeating code, which of course, is a great thing!


If you got really excited about Web Components and want to know more about it please check these talks:

Web Components: A Tectonic Shift for Web Development – Google I/O 2013

Building modern apps with Polymer & Web Components

And these articles too:

A Detailed Introduction To Custom Elements

Custom Elements defining new elements in HTML

And that’s it… I hope you now have a better understanding of what this new technology is capable of doing and start creating your reusable components to share with the community :)

Token based Authentication – Json Web Token

25 Jun

What is it for?

Token based authentication is a security technique for authenticating a user who attempts to log in to a secure system (e.g. a server), using a signed token provided by the server.

The authentication is successful if the system can prove that the token belongs to a valid user.

It is important to note that JSON Web Token (JWT) provides signed tokens but not encrypted ones, so passwords or any critical information must not be included in the token unless you encrypt the data (e.g. using JWE).

Why use it?

Here are some advantages of choosing JWT:

  • Standard: JWT is becoming a standard, and there are multiple libraries for many languages (Ruby, Java, Python, Node, Backbone), so integration with your language or technology should be pretty easy.
  • Cross-domain / CORS: Since the information is transmitted using an HTTP header, you are able to make AJAX requests to any server or domain.
  • Protection from CSRF: The token must be included in every request made to the server, and will be validated by the server. The token is linked to the user’s current session.
  • Server side scalability: The token is self-contained (i.e. it contains all the user info), so there’s no need to keep a session store; the rest of the state lives in the client’s local storage.
    The token may be generated anywhere, so you are not tied to any specific authentication scheme, decoupling this process from your application.

Warning: since the information is transmitted in an HTTP header and its size is limited, the token size could be an issue.

How does it work?

Let’s see an example of how to use JWT to authenticate a user. In this example we will be using Ruby, Rails and AngularJS.

Server Side

Let’s suppose we have an authentication controller.

When a post to create a session comes to the server, we validate the user and create a new JWT.
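A sketch of what that might look like with the ruby-jwt gem (the controller shape, model lookup and secret source are assumptions):

```ruby
require 'jwt' # ruby-jwt gem

class SessionsController < ApplicationController
  SECRET = Rails.application.secrets.secret_key_base # assumed key source

  # POST /sessions
  def create
    user = User.find_by(email: params[:email]) # assumed model lookup
    if user && user.authenticate(params[:password])
      payload = { user_id: user.id, iat: Time.now.to_i }
      render json: { token: JWT.encode(payload, SECRET, 'HS256') }
    else
      render json: { error: 'invalid credentials' }, status: :unauthorized
    end
  end
end
```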

Then, for every request we have to validate the token before processing it.
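For instance, with a before_action that rejects requests carrying a missing or invalid token (again a sketch, assuming the ruby-jwt gem and a Bearer Authorization header):

```ruby
class ApplicationController < ActionController::Base
  SECRET = Rails.application.secrets.secret_key_base # assumed key source

  before_action :authenticate_request!

  private

  # Decode and verify the token before any action runs
  def authenticate_request!
    token = request.headers['Authorization'].to_s.sub(/\ABearer /, '')
    payload, _header = JWT.decode(token, SECRET, true, algorithm: 'HS256')
    @current_user = User.find(payload['user_id'])
  rescue JWT::DecodeError
    render json: { error: 'invalid token' }, status: :unauthorized
  end
end
```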

Let’s take a look at the token. When we encode a new token, we get this string:

Then, when we decode the token, we retrieve this:
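The encoded token is three dot-separated base64url parts: header.payload.signature. A stdlib-only Ruby sketch of that structure (this illustrates the format, not a full JWT implementation; a real decoder must verify the signature):

```ruby
require 'json'
require 'base64'
require 'openssl'

SECRET = 'my-secret' # illustrative key

def b64(data)
  Base64.urlsafe_encode64(data).delete('=') # JWT uses unpadded base64url
end

def encode_token(payload)
  header    = b64(JSON.generate(typ: 'JWT', alg: 'HS256'))
  body      = b64(JSON.generate(payload))
  signature = b64(OpenSSL::HMAC.digest('SHA256', SECRET, "#{header}.#{body}"))
  "#{header}.#{body}.#{signature}"
end

def decode_token(token)
  _header, body, _signature = token.split('.')
  # Re-pad before decoding; a real implementation verifies _signature first
  JSON.parse(Base64.urlsafe_decode64(body + '=' * (-body.length % 4)))
end

token = encode_token(user_id: 42)
p token                  # "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.…"
p decode_token(token)    # {"user_id"=>42}
```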

If we want, we could add some claims to the token:

  • exp: identifies the expiration time on or after which the token MUST NOT be accepted for processing.
  • nbf: identifies the time before which the token MUST NOT be accepted for processing.
  • iat: identifies the time at which the JWT was issued.

Client Side

Here is an example of a sign-in request to the server from the Angular app.

Once we have the token, we can make any request to the server using the token.
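A sketch with Angular 1.x’s $http service (the URLs, field names and storage choice are assumptions):

```javascript
// Sign in: exchange credentials for a token
$http.post('/sessions', { email: email, password: password })
  .success(function (data) {
    // Keep the token client-side; the server holds no session
    localStorage.setItem('token', data.token);
  });

// Later requests carry the token in the Authorization header
$http.get('/projects', {
  headers: { 'Authorization': 'Bearer ' + localStorage.getItem('token') }
});
```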




How we do Asynchronous Planning Poker Estimations

21 May

Have you ever worked in teams where several simultaneous projects take place, and stories need to be estimated quickly and accurately in order to commit to a delivery date?
If you have ever encountered this challenge, you will probably find this article very useful.

A different scenario from Sprint Planning

Traditional Scrum Sprint planning definitely works for some of our clients. But in this particular scenario we are talking about a team working on several projects for a single client, so we are constantly receiving tickets which need to be estimated. Since it does not make sense to gather the whole team so often just to estimate, we came up with the idea of asynchronous planning. We are positive you will find it extremely helpful.

How does it work?


  • Once a new task arrives, the team members working on that particular project know they must post their personal estimates on a special board (asynchronously, but not anonymously). The board then contains several post-its with the estimates facing the wall, so they are not visible to the rest of the team.
  • After a minimum number of estimates has been posted, one team member turns the pieces of paper around and takes the average.
  • In the rare cases where the estimates differ too much for the average to be useful, the team members gather to discuss the estimates in more detail and reach an agreement.

Every two weeks the estimations are analyzed and compared with the actual execution time. Along with this, a retrospective meeting takes place to improve the overall functioning of the team.


Pros

  • Super fast.
  • Average improves accuracy.
  • Visibility (every team member sees all estimates).
  • Time saving.
  • Team player philosophy.
  • Improves performance.


Cons

  • Fewer group discussions.
  • Only works with highly organized teams.

Taking it to the next level

We started this process using only a whiteboard, a few markers and some post-its. As we are geeks, and love creating software to solve pretty much everything, we decided to take this to the next level and create a specific tool for it. We called it “Zenkai” – playing with the letters of “Kaizen”.


It works by simply automating the process described above.

Pending Estimates

Dashboard with pending estimates.




Tickets per project report.


Hours and Team Velocity over Time.


Zenkai on GitHub - Fork it and tweak it, or submit your Pull Request!

We’ve been following this process (and using this tool) for almost a year now and are constantly trying to improve it. Naturally, it facilitates the estimation process. Beyond that, it truly helps our teams stay aligned, deliver on time, and keep track of their improvement.

If you want to implement this with your team, please contact us and we’ll be happy to discuss further!

Lean Startup Machine Montevideo

19 May

Lean Startup Machine is taking place in Montevideo this year, on August 1st – 3rd, 2014.

This three-day workshop focuses on teaching innovators and entrepreneurs how to start a business using Lean Startup.

Ariel Ludueña (CEO) and Martin Cabrera (CTO), both at Moove-IT, have been invited as mentors to be part of this great Lean Startup educational experience.




Is Scala worthwhile?

23 Sep

If you’re a Java developer and start digging into Java blogs, forums, books, etc., you’ll surely find people arguing in favor of Scala, and read a lot about its functional approach and how cool it is.

Here I won’t attempt to cover Scala’s features in depth, though I will point out why I think it’s worth giving it a try.

Some history

Scala was designed in 2001, and its first version was implemented on the Java platform in 2003 by Martin Odersky, Ph.D., a professor of programming methods at EPFL. He implemented Java generics as well as the current generation of javac, so it’s safe to say this guy has a lot of Java knowledge :)

Later, in January 2011, the Scala team won a five-year research grant of over €2.3 million, and with these funds Odersky and his collaborators founded Typesafe Inc. in May. In 2012 Typesafe received a $3 million investment from Greylock Partners.

You may wonder: is this just theoretical research from some university, or is it a platform for building enterprise software? Well, after looking at who’s using it, it turns out that many companies are building new services based on Scala, including Siemens, Novell, Sony, LinkedIn, Foursquare and Twitter.

Let’s get into some code

Syntactic sugar

Being a newer language, it provides syntactic constructions aimed at developer efficiency. Some quick examples:

Scala aims not to disrupt object-oriented principles; that’s why it has an object construct, which is exactly what it says: one instance of a class. Given that, and bearing in mind that you can give an object the same name as a class, there’s no more static keyword: just call the object by its name.
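A sketch (the names are illustrative):

```scala
class Greeter(name: String) {
  def greet(): String = s"Hello, $name"
}

// A single instance sharing the class's name, taking the
// place of Java's static members
object Greeter {
  def default: Greeter = new Greeter("world")
}

Greeter.default.greet() // "Hello, world"
```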


Functional programming

As you can read everywhere, Scala is a hybrid: it’s both an imperative object-oriented language and a Turing-complete functional one. There’s a lot of interesting theory about functional programming written by excellent authors, so here we’ll just see some examples again:

A function taking another function as a parameter: predicate is a function that takes one element of the same type A as the list and returns whether it should be filtered or not.

Here we’re creating another function from the previous one by fixing one of its parameters. This returns a new function, which I called filterList. Note that it could have been declared as a val, since in functional languages functions are first-class citizens.

Finally, we can call the filterList function with a predicate which, given an element of type String (inferred from the List of Strings), computes the standard java.lang.String.startsWith(String).
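Sketched together under assumed names (a curried filter, its partial application, and the final call):

```scala
// A function taking another function as a parameter:
// predicate decides, for each element, whether it is kept
def filter[A](list: List[A])(predicate: A => Boolean): List[A] =
  list.filter(predicate)

// Fix the list parameter; the result is a new function, filterList
// (it could equally be a val: functions are first-class values)
def filterList = filter(List("Alice", "Bob", "Anna")) _

// Call it with a predicate; the element type String is inferred
filterList(s => s startsWith "A") // List(Alice, Anna)
```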

Pattern matching

Some people introduce pattern matching as a switch statement with some enhancements. In my opinion it is an entirely new feature, since it’s more powerful and less restrictive.

Let’s see an example:
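A sketch of such a match (the case values are illustrative):

```scala
val list: Any = List("a", "b")

val result = list match {
  case List("a", "b")                => "a list with exactly those two strings"
  case "hello"                       => "the string literal hello"
  case n: Int                        => "an integer: " + n
  case s: String if s startsWith "A" => "a string starting with A"
  case _                             => "anything else"
}
```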

There is a lot going on here. First, we’re pattern matching on the value of list, and Scala compares it with the left side of each case expression, from top to bottom. In the first case we compare against a literal list: we ask whether it’s a list containing those exact two strings. Then we simply compare against a string literal, or we can ask for the value’s type, as in the Int case. We can also add a conditional guard, like the string case with the if s startsWith “A” condition.

Everything is an expression

Another great thing is that the whole match we just saw is an expression, returning the value on the right side of the matched case, meaning that we could write:
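For instance (cases are illustrative; note that no type annotation is needed):

```scala
val myListString = list match {
  case List("a", "b") => "a list with exactly those two strings"
  case n: Int         => "an integer: " + n
  case _              => "something else"
}
```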

And the compiler will infer the type of myListString, in this case String, because all the right sides are Strings. They could be of other types, in which case Scala would take the closest base class that all of them extend.

In fact, all sentences in Scala are expressions, meaning we can take the result of, for example, an if expression:
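A sketch:

```scala
// if yields a value, so it can be assigned directly
val count = 42
val label = if (count > 10) "big" else "small" // "big"
```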


Traits

You can think of traits either as Java interfaces in which you can implement methods, or as abstract classes with the advantage that you can extend as many as you want.

Whichever way you see it, traits are a neat way of encapsulating behavior and reusing code.

Imagine you have a Bird class like this:
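A sketch of such a class (the method bodies are illustrative):

```scala
class Bird {
  def fly(): Unit  = println("flying")
  def swim(): Unit = println("swimming")
}
```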

Then you could use it like this:
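Perhaps along these lines (assuming fly and swim each print a message):

```scala
val bird = new Bird
bird.fly()  // prints "flying"
bird.swim() // prints "swimming"
```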

That was easy. Now suppose you need a bird like the Penguin, which can swim but can’t fly. You could then make an abstraction like FlyingBird, so that Pigeon and Hawk would extend it while Penguin directly extends Bird.

Ok, now comes the Frigatebird, a bird that can fly but cannot swim. This bird will surely drive you crazy, because in a language like Java you can’t create a class hierarchy modeling all these birds without duplicating code. We are facing the well-known problem of multiple inheritance and how Java handles it: you can create two abstract classes, one for flying and another for swimming, but you can’t extend both.

Here’s how you would likely solve it if you’re using Scala:
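A sketch: one trait per capability, each carrying a default implementation:

```scala
trait Flying {
  def fly(): Unit = println("flying")
}

trait Swimming {
  def swim(): Unit = println("swimming")
}
```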

This is how the birds would look:
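Each bird mixes in only the capabilities it actually has (a sketch, assuming the Flying and Swimming traits above exist):

```scala
class Bird

class Pigeon      extends Bird with Flying
class Hawk        extends Bird with Flying
class Penguin     extends Bird with Swimming
class Frigatebird extends Bird with Flying
```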

And now you could execute things like this:
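For example:

```scala
val penguin = new Penguin
penguin.swim() // Penguin mixes in Swimming

val frigatebird = new Frigatebird
frigatebird.fly() // Frigatebird mixes in Flying
```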

But something like this won’t compile:
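A sketch of such a non-compiling snippet:

```scala
// The element type is inferred as Bird, the closest common ancestor
val birds = List(new Frigatebird, new Penguin)
birds.foreach(b => b.fly()) // error: value fly is not a member of Bird
```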

In this last case, Scala complains that fly is not a member of Bird. That’s because it infers that Bird is the only class Frigatebird and Penguin have in common, assigning Bird as the type parameter of the List.

So, why Scala?

Besides the features we’ve quickly looked at, Scala introduces implicits, structural types, extractors and many more, taking the best from both worlds (object-oriented and functional programming). That said, Scala is concise, flexible and scalable.

I haven’t mentioned performance, but you might worry about functional constructions that create new instances instead of writing values in place. We know the garbage collector is usually pretty good with short-lived objects; also, thanks to the concise language, you can spend your time optimizing the critical parts instead of dealing with tedious, non-performance-critical ones. There are benchmarks comparing Scala to Java, and they confirm that it performs well.

As I mentioned earlier, Scala has been accepted by the industry, so you can be sure it will be around for a long time and its community will remain supported.

Finally it’s good to know there are plenty of tools to empower Scala, from plugins for almost every IDE and a very cool building tool, to web frameworks and database access libraries.

In the worst case scenario you could end up writing Java-style code in Scala and it would be compiled to almost the same binary code and it’d be run in the same JVM. The question we should ask ourselves is: why don’t we just give Scala a try?


5 Basics To Master Scrum

9 Sep

Scrum Samurai

Agile has been part of our lives for a while now, and most of the time we use it in our own kind of way. Several agile frameworks have been a subject of study by our teams throughout the years and we have learned a great deal from each of them.
As it turns out, in the last couple of months we felt like our agile mixin’ needed some kind of structure in order to really make us grow, so we decided to give Scrum a chance.

But there is one thing about Scrum that didn’t quite work for us. The software field is full of Scrum fanatics who are always trying to evangelize you and make you “buy” this framework. The thing is that we normally question everything and like to draw our own conclusions, so their fanaticism turned out to be kind of counterproductive. Still, some of us signed up for a ScrumMaster Certification to find out for ourselves.

To our surprise, this experience was completely different from what we had expected. We all thought we would be sitting for two days, listening to a guy talk about Scrum and the whole framework (roles, techniques, etc), but the exact opposite happened.
Alan (our trainer) had a much more philosophical way of teaching. He didn’t sell us Scrum; he explained to us how it worked and gave us the tools for us to apply it. He didn’t put thoughts in our heads; he guided us. We didn’t just listen about the framework; we learned about life itself.

I am not just going to talk about Scrum right now; I want to do more than that. Scrum is something you can continue to learn and investigate by yourselves. I want to share with you all the other things that fascinated us, the ones that really got us thinking and realizing that there is much more to Scrum than just a framework. I will try to be as concrete as possible; it is up to you to make these concepts grow and change your life.


First of all, we have to be intellectually humble. By humility we mean the ability to evaluate ourselves correctly. It is really important that we are convinced that we don’t know everything, that we are probably wrong as many times as we are right, and that others have as much chance of being right as we do.
It is most important that we learn to put ourselves in someone else’s shoes. We won’t achieve this until we gain enough humility. Thinking as if we were the team, the client and the product will lead us to richer and better decisions.

Yes, and…

We could say there are three kinds of people in this world. There are the negative ones, who will always give “No” for an answer, no matter what we say. We need to avoid teaming up with them.
Then there are the frightened ones; they will always find an excuse, a way to avoid challenge, and they are not committed. Their favourite line is “Yes, but…”, and we know they might still be saved if we all try very hard.
Finally there are the positive ones; they not only answer “Yes”, they always manage to answer “Yes, and…”. They always look at the bright side of things, find ways to make ideas grow, and are 100% committed. These are the ones we want to have by our side in our teams, in our lives. These are the ones we want to be.

Always Deliver Value

Every customer seeks to get some kind of value in return, and software is not the exception. When someone comes to us asking for a product or service, we not only have to give them what they asked for, we need to make sure it will still be of value to them once we deliver. That is why Scrum tells us to apply the concept of organic growth.
Organic growth defines a process in which we iteratively make each deliverable evolve from the previous one, with one little characteristic: we need to ensure that every deliverable is functional and adds value to our customer. If it does not, it is no good.
We need to understand that by having functional and frequent deliveries, we gain the ability to detect errors, bugs or enhancements much earlier, making errors cheap.

Continuous Improvement

We won’t grow unless we suffer. The only thing we will achieve by being comfortable is decay. We need to get out of our comfort zone to better ourselves, try to break the status quo.
Alan defined the concept of Pain-Driven Facilitation, in which he stated that the best way to grow is to challenge ourselves with small, tolerable and constant changes. Once they stop being a challenge, we need to introduce new ones, and so on.
There is a Japanese word, “Kaizen”, meaning “change for the better”, which helped make that concept sink in.


Some people say that perfection does not exist. I say it does; whether we are able to achieve it or not is another story. People always try to get what they can’t have, and maybe perfection is one of those things.
That is what makes it so unique and desirable. We will always try to get there, because it is hard, because we know we probably won’t. Our ambition is what makes us try harder, learn more and grow to be successful.
Scrum is very similar. We know we won’t be able to make everything work as Scrum expects, but it challenges us to do so, and we like it. We will always be improving ourselves to get there.

Keep Thinking

I don’t feel there is much need to write a big conclusion. There is only one thing I would like to ask you: keep thinking about these concepts and sleep on them for a while. Then, if you think it is worth it, try to spread them out as much as possible, as our trainer did with us.
Let’s help people be more humble, have a positive perspective, deliver value, change for the better and be ambitious.
Stay tuned, there is more to come!

Photo credit: graphistolage

A story about Gemified Engines

21 Aug

Needless to say, Engines are a core aspect of the Ruby on Rails framework (versions ~>3.1 and ~>4.0, for now). Giving a basic understanding of what they are and how they work is what we wanted to address in these slides: a short introduction we made for one of our “Friday Talks”, together with “empanadas” for lunch.