Issue 30

Human software assessments - Interview with Tudor Gîrba

Ovidiu Mățan
Founder @ Today Software Magazine

Tudor is a researcher and software consultant. He has recently become known for winning the Junior category of the annual award offered by AITO (Association Internationale pour les Technologies Objets), an organization that supports research in object-oriented technologies. He was present on December 13th in Cluj at the Be Fast and Curious event, organized by 3Pillar Global.

[Ovidiu] Please describe for the readers of our magazine the capabilities of the Mondrian engine, which received the 2nd prize in the ESUG 2006 Innovation Awards and also contributed to the award offered by AITO.

[Tudor] Mondrian is a visualization engine. It was one of the first visualization engines to provide a compact scripting API. For example, visualizing a class hierarchy can be done like this:

view nodes: classes.
view edgesToAll: #directSubclasses.
view treeLayout

If we want to see each class as a rectangle whose dimensions are given by code metrics, we can extend that visualization like this:

view shape rectangle
     height: #numberOfMethods;
     width: #numberOfAttributes.
view nodes: classes.
view edgesToAll: #directSubclasses.
view treeLayout

All in all, Mondrian opened a new direction by showing how it is possible to express visualizations succinctly.

[Ovidiu] The software and data analysis platform Moose has, since 2003 and under your guidance, made the transition from an academic platform to one that can also be easily used in a business environment. As described in The Moose Book, there is a well-defined process built from data acquisition modules, model descriptions, and data processing engines and tools. Can you give us some examples of its use in software applications or data analysis?

[Tudor] Moose is an extensive open-source project that was started 17 years ago at the University of Berne, Switzerland. I have been leading the project for 12 years, and it is currently supported by several companies and research groups around the world. The goal of the platform is to make the crafting of custom analyses easy. This enables developers to combine generic services and contextualize them to the specifics of the system at hand.

A significant class of use cases comes from "testing" the software architecture. For example, in a client-server JEE system, we might want to forbid the use of @Stateful beans. A check for this rule would look like:

 allTypes noneSatisfy: [ :type | type isAnnotatedWith: 'Stateful' ]

This was a rather simple example of a possible rule. A more complicated rule could specify that all calls from the client to the server must happen only through interfaces implemented by classes annotated with @Remote. A check in this direction would be:

   (((allTypes select: #isUIType) flatCollect: #clientTypes) select: #isServerType)
      allSatisfy: [ :type | 
         type isInterface and: [ type directSubclasses anySatisfy: [:class | class isAnnotatedWith: 'Remote' ]]]

Moose is not at all limited to only these kinds of checks, and can be used for building complete data analysis tools with sophisticated interactions.

For example, the attached picture shows an analysis session composed of four steps: in the first pane we scripted a visualization of the classes in the system, shown live in the second pane; selecting a class from the visualization opened the details of that class in the third pane; and in that third pane we built another visualization, shown to the right, of how methods use the attributes defined in the class.

This is but an example that shows how developers can easily combine and customize analyses in a visual and interactive environment.

More details about Moose can be found at: moosetechnology.org

[Ovidiu] The topic you will approach at the Be Fast and Curious event is Humane Assessment, a new software engineering method whose purpose is to help with decision making. Can you briefly describe what it is all about?

[Tudor] Software engineers spend more than half of their time assessing the state of the system in order to figure out what to do next. In other words, software assessment is the most costly software engineering activity. Yet, it is rarely a subject of conversation. As a consequence, engineers end up spending most of this time reading code. However, reading is the least scalable way to approach large systems. For example, if a person who reads fast requires 2 seconds per line, it will take that person 1 man-month to read 250’000 lines of code (the size of a medium system). Given that engineers need to understand the system on a daily basis, it follows that decisions are being made on partial and often incorrect information.
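The back-of-the-envelope arithmetic behind that 1-man-month figure can be checked in a Pharo workspace. The line count and reading speed come from the interview; the 8-hour day and 20-day working month are my own assumptions:

```smalltalk
"250'000 lines at 2 seconds per line, read end to end"
| seconds hours days |
seconds := 250000 * 2.       "500'000 seconds"
hours := seconds / 3600.0.   "~139 hours"
days := hours / 8.           "~17 eight-hour working days,
                              i.e. roughly one man-month of ~20 days"
Transcript show: days printString; cr
```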

But it does not have to be like that. Humane Assessment is a method that offers a systematic approach to making software engineering decisions. It is based on the core idea that gathering information from a large data set is best done through tools. At the same time, these tools have to be customized to the specifics of the system. This is why engineers need to be able to build these tools during development: they are the only ones who know what their system needs.

To make this practical we need a new breed of platforms that enables developers to craft tools fast and inexpensively. This is where Moose comes in. Through Moose we show that this proposition can be realized in practice, and I argue that the kinds of features offered by the platform should become common in IDEs.

Tools are necessary, but they are not sufficient. That is why Humane Assessment also offers a guide both for identifying the needed engineering skills and for adapting the development process. For example, given that the structure of the system shifts with every commit, the development team needs to observe and steer the overall architecture on a daily basis. To this end, Humane Assessment recommends a daily assessment standup in which technical problems that have previously been made explicit through custom analyses get discussed and corrected.

More information about Humane Assessment can be found at: humane-assessment.com

[Ovidiu] If you were to write an article on software tomorrow, what would its title be?

[Tudor] Software environmentalism.

[Ovidiu] What is your perspective on the evolution of technology over the next 10 years and its impact on people’s everyday lives?

[Tudor] Technology will have an ever-increasing role in our daily life. This is an obvious direction. Less obvious are the consequences.

We produce software systems at an ever-increasing rate. On the one hand, this is a good thing. On the other hand, our ability to get rid of older systems does not keep up with that pace. Let’s take an example: a recent study showed that there are some 10’000 mainframe systems still in use. These systems are probably older than most of the developers. This shows us that software is not as soft as we might think. Once created, a system produces consequences even at a social and economic level. The less flexible it is, the more entrenched it becomes over time, until throwing it away is almost impossible. The only remaining options are reconditioning and recycling.

Because of the spread and impact of the software industry, we need to look at software development as a problem of environmental proportions. Systems must be built in a way that allows us to easily disassemble them in the future. As builders of the future world, we have to take this responsibility seriously.
