JPragma Blog

Pragmatic Developers, Elegant Software

Thoughts on code reviews and dev process

Nowadays most software projects set the master branch to be read-only, use pull requests to merge feature branches, and require code reviews with multiple approvals. Undoubtedly this is a good practice in theory; however, as usual, the devil is in the details, and quite often typical code reviews are useless and even harmful.

  • Most standard code review tools display all changes in a flat list, so the context of these changes is completely lost. The order in which files appear has nothing to do with the logical organization of the change (or of the codebase in general).
  • It is extremely hard to reason about changes spanning more than a few files with more than 10-20 added/modified lines of code in each, especially when these changes mix design concepts with low-level implementations of particular functions. It requires a very deep context switch and extreme discipline from the reviewer.
  • As a result, many reviews focus only on very simple things: things that could and should be discovered by code analysis tools, IDE inspections, SonarQube, and other automatic quality gates. A reviewer’s time is expensive; code analysis tools are cheap.
  • When a PR is submitted for review, the code is already written. It is often too late to comment on fundamental problems in the design. Such comments are hard to address, and naturally they are met with lots of resistance and even anger from the code author.
  • Often there is a tight deadline to deliver the feature, so the pressure falls on the reviewer: “It is his fault we didn’t deliver the fix on time; he didn’t approve my code change in time”.
  • Typically the author of the PR selects the reviewers. This leads to an uneven distribution of reviews (which consume significant time), or sometimes it promotes selecting reviewers who “go easy on everyone”.

What’s the solution?

I propose using the following process:

  1. A developer starts working on the assigned feature/ticket, e.g. TICKET-123.
  2. Create and check out a feature branch – feature/t123/my-feature.
  3. Analyze the feature and, if it is complex enough, write a design document. Depending on the complexity, this could be just a few sentences in an MD file or a full wiki page with diagrams, etc.
  4. Create another branch, feature/t123/my-feature-design, commit your design document there (or a link to it), and submit a PR with the overall feature/t123/my-feature as the target. Some projects might require that only tech leads or architects review such “design” PRs.
  5. Once the “design” PR is approved, merge it and create another branch, feature/t123/my-feature-blueprint. The purpose of this branch is to create the overall skeleton of the feature, adding placeholders for every implementation detail; the more placeholders you add, the better. Each such placeholder becomes a subtask of the feature. These placeholders might use special markers like // PH-1 User input validation, and your IDE can easily be configured to display a consolidated list of such markers (see the sketch after this list). After this is done, submit a PR to merge feature/t123/my-feature-blueprint into feature/t123/my-feature. Here your reviewers have an opportunity to evaluate the overall feature design at the code level and express their concerns or suggestions.
  6. Next, implement every placeholder defined in the previous step. Each placeholder is implemented in a separate branch forked from feature/t123/my-feature, and a dedicated PR is submitted for each. Note that at this stage the “placeholder” branches can be implemented in parallel by several developers. Ideally, reviewers of these placeholder-implementation PRs should be assigned randomly, promoting a balanced workload as well as knowledge sharing within the development team.
  7. Once all placeholders are resolved, submit the final PR merging the overall feature/t123/my-feature into master. Since every change in this PR has already been reviewed and approved, the goal of this PR is simply to verify the commit history of the feature/t123/my-feature branch and make sure no direct code commits were made.
  8. Finally, merge the feature branch into master.
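
To make step 5 concrete, here is a minimal sketch of what a blueprint skeleton with placeholder markers might look like (the class, method, and placeholder names are hypothetical):

// Skeleton committed on feature/t123/my-feature-blueprint.
// Each PH-n marker becomes a subtask implemented in its own branch and PR.
class UserRegistrationService {

    RegistrationResult register(RegistrationRequest request) {
        // PH-1 User input validation
        // PH-2 Duplicate account check
        // PH-3 Password hashing and persistence
        // PH-4 Welcome email notification
        throw new UnsupportedOperationException('not implemented yet')
    }
}

// PH-5 Flesh out the request/result DTOs
class RegistrationRequest {}
class RegistrationResult {}

Configuring the IDE’s TODO patterns to match PH-\d+ yields a live, consolidated list of the remaining subtasks.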

This process has many advantages. First of all, reviewers have a chance to comment on the feature design before too many resources are spent on implementation. Secondly, instead of one huge PR, we are dealing with several small PRs, each staying at a single level of abstraction and never mixing the overall design with the small details of a particular step. Each such PR is easy to understand and to comment on meaningfully. The process promotes knowledge sharing in the team and minimizes the frustration and conflict that arise when a big PR is rejected.

As usual, comments, opinions, and criticism are more than welcome!

Written by isaaclevin

November 13, 2019 at 1:40 pm

Posted in General, Productivity

Identifying technical debt

Today’s very short post is about ways to identify technical debt and find the components/classes that require immediate attention.

  1. Files that have a large number of git commits (they are changed often). These files are dependency magnets; they most likely violate the “Open-closed principle” and the “Single responsibility principle”. Create a report by running:
    # list repo files with their total number of commits
    # (the grep drops the blank lines emitted by the empty --pretty format,
    # which would otherwise dominate the counts)
    git log --name-only --pretty=format: | grep -v '^$' | sort | uniq -c | sort -nr > git_stats.txt
    
  2. SonarQube provides pretty good reports showing the complexity and test coverage of your classes.

Written by isaaclevin

November 8, 2019 at 7:56 pm

Posted in Java

Properties of good tests

I recently read a very interesting article called “Test Desiderata” published by Kent Beck. In this article, Kent describes the properties of good tests. He also mentions that tests of different levels (unit, integration, acceptance, etc.) should focus on different properties.

Here I’d like to discuss some of these properties and try to find some practical steps to improve our tests in regard to these properties.

Kent Beck says:

“Programmer (aka “unit” tests). Give up tests being predictive and inspiring for being writable, fast, and specific.”

Writable — tests should be cheap to write relative to the cost of the code being tested.

So, unit tests are our first line of defense. They should be extremely cheap to write, easy to read, and very fast to execute.

Tools that help increase the development speed of such tests are the Spock Framework and Groovy in general. Even if the production code is in Java, I still prefer writing my unit tests in Groovy. “Optional typing”, “Multiline strings“, “Safe navigation“, “Native syntax for data structures”, and especially “Initializing beans with named parameters” are huge productivity boosters. Check out my presentation deck for a very basic introduction to Groovy. Spock takes productivity even further, allowing you to write very concise and expressive tests. My favorite feature is Spock’s own mock/stub framework; it is far more readable than the more commonly used Mockito. Enabling Groovy and Spock in Java projects is simply a matter of adding the appropriate Gradle or Maven dependency.
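
As a taste of how concise this gets, here is a minimal, self-contained Spock specification; all class names are hypothetical:

import spock.lang.Specification

// Hypothetical production types, defined inline to keep the example self-contained
interface PriceCalculator {
    BigDecimal basePrice(String sku)
}

class Customer {
    String name
    boolean premium
}

class OrderService {
    private final PriceCalculator calculator
    OrderService(PriceCalculator calculator) { this.calculator = calculator }
    BigDecimal totalFor(Customer customer, String sku) {
        def price = calculator.basePrice(sku)
        customer.premium ? price * 0.9 : price
    }
}

class OrderServiceSpec extends Specification {
    def 'premium customers get a 10% discount'() {
        given: 'a stubbed collaborator and a bean initialized with named parameters'
        PriceCalculator calculator = Stub {
            basePrice('book-1') >> 100.0
        }
        def customer = new Customer(name: 'Alice', premium: true)
        def service = new OrderService(calculator)

        expect:
        service.totalFor(customer, 'book-1') == 90.0
    }
}

Note how the named-parameter constructor and optional typing remove almost all of the ceremony of the equivalent JUnit/Mockito test.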

Here is a somewhat controversial thought... If our tests are very cheap to write, maybe we should treat them as immutable. I recently saw a tweet suggesting that we should be allowed to write new unit tests or delete existing ones, but never modify them. This might sound a bit extreme, but there is a rationale behind it. Quite often our tests look good initially, but their quality and readability degrade with time: we refactor production code, which leads to test failures; we realize that a test is outdated, and we start quickly “patching” it.

Kent Beck says:

Inspiring — passing the tests should inspire confidence.

Structure-insensitive — tests should not change their result if the structure of the code changes.

Predictive — if the tests all pass, then the code under test should be suitable for production.

These properties are very important for integration/end-to-end tests. I discussed this in my previous post. When writing “stable” e2e tests, I use real components/collaborators for the entire system and mock only “edge” dependencies such as DAO repositories, JMS abstraction interfaces, REST clients, etc. I prefer creating dedicated mock implementations of such “edge” components rather than using raw Mockito, JMock, or Spock. This helps make such tests readable for non-programmers. A typical e2e test starts a specially configured application context, invokes an input component, and validates the expected behavior at all “edges”. The value of such tests is enormous; they give us confidence that the entire system is working properly. Being structure-insensitive, e2e tests serve as a safety net when doing code refactoring.
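
For illustration, here is a minimal sketch of such a dedicated “edge” mock, assuming a hypothetical repository interface at the system boundary:

// Hypothetical DAO interface sitting at the "edge" of the system
interface CustomerRepository {
    void save(Customer customer)
    Customer findByName(String name)
}

class Customer {
    String name
}

// Dedicated in-memory implementation used by e2e tests instead of a raw mocking
// library; it reads naturally and can be reused across many tests.
class InMemoryCustomerRepository implements CustomerRepository {
    private final Map<String, Customer> storage = [:]

    void save(Customer customer) { storage[customer.name] = customer }
    Customer findByName(String name) { storage[name] }

    // assertion helper for validating behavior at this edge
    boolean contains(String name) { storage.containsKey(name) }
}

Because the fake is an ordinary class, assertions against it read as plain statements about stored data rather than as mocking-library DSL.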

In one of the future posts, I’ll cover my experience working with legacy code and how to efficiently create e2e tests there.

As usual, comments, opinions, and criticism are more than welcome!

Written by isaaclevin

November 3, 2019 at 10:54 am

Posted in Java, Testing

Don’t over spec your tests

Yesterday I had a discussion with my colleagues about the properties of good tests. I think that, in general, tests serve four purposes, listed here in increasing order of importance:
  1. Validate correctness of the system under test
  2. Document usage and expectations of the tested module
  3. Help design the component’s API and interactions (when practicing TDD)
  4. Provide a safety net that enables fearless refactoring

The last point is the most important in my opinion. To provide such a safety net, tests must be, as stated by Kent Beck, “sensitive to changes in system behavior, but insensitive to changes in code structure”.

How do we achieve this?

Perhaps we should prefer higher-level component/module tests. Such tests are considerably more stable and insensitive to structural changes. We should limit the usage of mocks in such tests, probably mocking only collaborators that live outside the component boundaries.

We should only verify interactions with collaborators that are essential to the business logic of our component.

What do I mean by that? Often I see unit tests where developers stub the responses of component collaborators and then verify ALL of these interactions. With Mockito they sometimes utilize “lenient” matchers like any() or isA() while stubbing, and “strict” matchers like eq() while verifying. This technique is OK, but in my opinion it should only be applied to true mocks – calls that are essential to the behavior of the system.

Calls to simple data providers (stubs) shouldn’t be verified at all; otherwise the test conveys the wrong intent to the reader and becomes quite fragile.

The difference between stubs and mocks is explained brilliantly in this article by Martin Fowler.
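
Here is a minimal sketch of that distinction in Spock (all names are hypothetical): the rate provider is a stub that merely supplies data and is never verified, while the payment gateway is a true mock whose interaction is essential business behavior.

import spock.lang.Specification

// Hypothetical collaborators
interface RateProvider {
    BigDecimal rateFor(String currency)            // a simple data provider (stub)
}
interface PaymentGateway {
    void charge(String account, BigDecimal amount) // an essential side effect (mock)
}

class Checkout {
    private final RateProvider rates
    private final PaymentGateway gateway
    Checkout(RateProvider rates, PaymentGateway gateway) {
        this.rates = rates
        this.gateway = gateway
    }
    void pay(String account, BigDecimal amountUsd, String currency) {
        gateway.charge(account, amountUsd * rates.rateFor(currency))
    }
}

class CheckoutSpec extends Specification {
    def 'charges the converted amount'() {
        given: 'a stubbed rate provider whose calls are never verified'
        RateProvider rates = Stub { rateFor('CAD') >> 1.25 }
        PaymentGateway gateway = Mock()
        def checkout = new Checkout(rates, gateway)

        when:
        checkout.pay('acc-1', 100.0, 'CAD')

        then: 'verify only the interaction essential to the business logic'
        1 * gateway.charge('acc-1', 125.0)
    }
}
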
What do you think? How do you make your tests insensitive to structural changes?

Written by isaaclevin

October 19, 2019 at 11:10 am

Posted in Java, Testing

Project release and version management using gradle and git

Two very useful Gradle plugins:

1. axion-release-plugin
https://github.com/allegro/axion-release-plugin

This plugin derives the project version from tags in the git repository. I use the following rules in my projects:

  • Start with the default version 0.1.0-SNAPSHOT
  • Every release of the master branch increments the second digit
  • Production emergency fixes are done on a branch created from the version tag. Such a branch name must start with the “hotfix-” prefix. Releasing this branch increments the third digit.
  • Feature branches must have the “topic-” prefix. They are never released; however, their version includes the feature name.

To check the current project version, run “gradle currentVersion”.
To release the project, run “gradle release”.

2. gradle-nexus-plugin
https://github.com/bmuschko/gradle-nexus-plugin

This plugin adds the ability to upload project artifacts to a Nexus repository. Simply run “gradle upload”.

Example build.gradle

buildscript {
    repositories {
        maven {url = 'http://repo1.maven.org/maven2'}
    }
}

plugins {
    id 'java'
    id 'pl.allegro.tech.build.axion-release' version '1.3.4'
    id 'com.bmuschko.nexus' version '2.3.1'
}

scmVersion {
    versionIncrementer 'incrementMinor'
    branchVersionIncrementer = [
            'hotfix-.*' : { c -> c.currentVersion.incrementPatchVersion() }
    ]
    branchVersionCreator = [
            'topic-.*' : 'versionWithBranch'
    ]
}

group 'com.jpragma.myproject'
version = scmVersion.version

repositories {
    jcenter {url = 'http://jcenter.bintray.com/'}
}

dependencies {
    compile 'org.slf4j:slf4j-api:1.7.21'
    testCompile 'junit:junit:4.12'
}

nexus {
    sign = false
    repositoryUrl = 'http://localhost:8081/nexus/content/repositories/internal/'
    snapshotRepositoryUrl = 'http://localhost:8081/nexus/content/repositories/internal-snapshots/'
}

Written by isaaclevin

July 12, 2016 at 2:03 pm

Posted in Uncategorized

Useful aliases and ENVs in cygwin .profile

alias cp='cp -i'
alias mv='mv -i'
alias df='df -h'
alias du='du -h'
alias grep='grep --color'
alias itest='mvn clean test verify -U'
alias ls='ls -h --color'
alias mci='mvn clean install -U'
alias mi='mvn install'
alias mjr='mvn jetty:run -o'
alias mjrwithprofile='mvn clean jetty:run -DAPP_ENV=dev -Dspring.profiles.active=dev'
alias ps='ps -W -a -f ux'
alias rm='rm -i'

function xtitle {
    echo -ne "\033]0;$1\007"
}

export MAVEN_OPTS_BASE="-server -Xms128m -Xmx2048m -XX:MaxPermSize=256m"
export MAVEN_OPTS_DEBUG="$MAVEN_OPTS_BASE -Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,address=5555,server=y,suspend=n -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=7777"
alias mvnAllOff='export MAVEN_OPTS=$MAVEN_OPTS_BASE'
alias mvnDebugOn='export MAVEN_OPTS=$MAVEN_OPTS_DEBUG'

mvnDebugOn

Written by isaaclevin

May 11, 2016 at 8:45 am

Posted in Uncategorized

Useful options running “mvn test”

There are some very useful command-line options you can specify when executing Maven tests:

-Dtest=[testname] allows selective execution of unit tests instead of running them all. [testname] can be a fully qualified class name, a class name without the package, or even a wildcard pattern such as *ServiceTest.

-Dsurefire.useFile=false forces the Surefire plugin to write failing-test details to stdout instead of files in target/surefire-reports/. This can be a huge time saver.

Written by isaaclevin

January 28, 2011 at 12:58 pm

Posted in Java