WPF WebBrowser control – part 1

Introduction

Recently I spent a lot of time developing a .NET WPF application that embeds a web browser (don’t hold me guilty for that, it’s the client’s requirement). Since the standard WPF WebBrowser has a lot of limitations and problems, I had to find several solutions that are not perfectly standard or ideal for the WPF world. This series of posts presents a summary of those solutions that I find particularly interesting. Most of the material presented here was taught to me by other developers or gathered on the Internet, though some of the solutions are my own.

Limitations

I found 2 main limitations of the WPF WebBrowser control:

  1. It is not well suited for the MVVM pattern. None of its functionalities is accessible through Dependency Properties, so you have to wrap it somehow if you want to employ it in an MVVM-based design. In my case, a decent part of the work went into building an adapter in order to use the WebBrowser in conjunction with the Caliburn.Micro framework.
  2. The set of properties and methods exposed by the WPF control is extremely limited. When you are using the WebBrowser control you are actually using a WPF wrapper of Internet Explorer. However, it hides a lot of the functionality that the actual underlying implementation has. If you want to have the full (?) power of IE at your disposal, you have to work your way down to the underlying COM object. The capabilities that are not visible via the .NET interface include, but are not limited to, the following:
    1. Using a different IE version (rather than the default IE7);
    2. Preventing the browser from showing annoying dialog windows when a Javascript error occurs;
    3. Attaching .NET handlers to Javascript events;
    4. Injecting HTML and Javascript from .NET code;
    5. Calling Javascript functions from .NET code.

Using the Internet Explorer 11 engine instead of the IE7 one

One of the easiest tricks to perform is also one of the subtlest. I don’t remember where I learned it, but I definitely did not find it myself. To make sure that the actual Internet Explorer component used inside your WPF application is the one from version 11 (or possibly higher, in the near future), you have to add a couple of keys to the Windows Registry. Open the editor (run regedit) and go to the branch with path HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION (create the key if it does not exist). Add a new DWORD value for each executable where the WPF WebBrowser control must use IE11, just like this:

[Screenshot: registry entries under FEATURE_BROWSER_EMULATION (reg-ie11)]

The name of the entry must be equal to the name of the executable file of your interest, while the value encodes the version of IE that the corresponding app will use (the hexadecimal value 0x00002af9 for version 11, in this case). Note that some applications may already be there (I had the Acrobat Reader executable, which I left there as an example). Also note that if you are developing the application with Visual Studio you may also want to add the name of the executable launched inside the IDE when debugging (the entry with the .vshost.exe suffix).
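If you prefer not to click around in regedit, the same entries can be written as a .reg file and imported. Here is a sketch, assuming your executable is called MyWpfApp.exe (a made-up name, replace it with yours; keep the .vshost.exe entry only if you need it while debugging):

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION]
; 0x00002af9 = IE11 emulation; one value per executable
"MyWpfApp.exe"=dword:00002af9
"MyWpfApp.vshost.exe"=dword:00002af9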

How to inject Javascript into HTML documents

First of all, you are going to need an extra dependency, so better add it immediately. Look at the picture below: you must add the mshtml library. You should find it in the Assemblies -> Extensions section of the Reference Manager dialog window.

[Screenshot: Reference Manager with the mshtml assembly selected (ref-01)]

Now let’s suppose that you have a class (typically a Window or a UserControl) that uses a WebBrowser. You should have something that looks like this (the XAML):


<UserControl x:Class="ExampleWebBrowserWrapper"
 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
 xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" 
 xmlns:d="http://schemas.microsoft.com/expression/blend/2008" 
 mc:Ignorable="d" 
 d:DesignHeight="300" d:DesignWidth="300">

    <Grid>
        <WebBrowser x:Name="webBrowserControl"></WebBrowser>
    </Grid>

</UserControl>     
        

In order to inject a Javascript script into the HTML page, so that it is interpreted and possibly executed in the context of the page, you can implement a method like the following in the ExampleWebBrowserWrapper class (the rest of the class code is omitted for brevity’s sake):


using mshtml;
/* other "using" are omitted */

public partial class ExampleWebBrowserWrapper : UserControl
{
    public void InjectScript(String scriptText)
    {
        HTMLDocument htmlDocument = (HTMLDocument)webBrowserControl.Document;

        var headElements = htmlDocument.getElementsByTagName("head");
        if (headElements.length == 0)
        {
            throw new IndexOutOfRangeException("No element with tag 'head' has been found in the document");
        }
        var headElement = headElements.item(0);

        IHTMLScriptElement script = (IHTMLScriptElement)htmlDocument.createElement("script");
        script.text = scriptText;
        headElement.AppendChild(script);
    }
}


Then you can call Javascript functions (the ones you injected or the ones already contained in the HTML page) from .NET code by calling the InvokeScript method of the WebBrowser class.

        public void InvokeScript(String javascriptFunctionName)
        {
            try
            {
                webBrowserControl.InvokeScript(javascriptFunctionName);
            }
            catch (System.Exception ex)
            {
                /* Handle Exception */;
            }
        }

ANTLR4 project with Maven – Tutorial (episode 3)

[Episode 1] [Episode 2] Now for something completely different. During the preparation of episode 3 I changed my mind and thought that the best way to approach the remaining issues (self embedding and AST) was to embrace a classic. So I decided to fall back to the good old arithmetic expressions, since they have proven to be the best didactic test bed for me. I developed a new project from scratch that you can find here on Github. It’s nothing more than an interpreter of arithmetical expressions, built using ANTLR4 and Maven (of course). The project also contains an object model for an Abstract Syntax Tree (AST) that fits my (our) needs. Please keep in mind that the focus of this episode is on how to define an object model and build an AST for the language. I will take for granted a lot of things that do not fall into this topic. If anything is not clear, well, there is always a comment section… oh and by the way… Disclaimer: This is not a proposal for a best practice. This is just a sharing of a toy project that I made up because I could not find anything similar.

The (labeled) grammar

The grammar that will be used is just a minor variation of the example found here.

grammar Arithmetic;

program : expression ;

expression
	: expression ('*' | '/') expression #Multiplication
	| expression ('+' | '-') expression #AlgebraicSum
	| term #AtomicTerm;

term: realNumber #Number
	| '(' expression ')' #InnerExpression;

realNumber : NUMBER ('.' NUMBER)? ;

WS : [ \t\r\n]+ -> skip ; // skip spaces, tabs, newlines

NUMBER : [0-9]+ ;

What I added is:

  1. Labels (those identifiers preceded by ‘#’);
  2. The missing operators (in the example there are only sum and multiplication).

Labels allow you to name a production out of a set of alternatives, so that you can discriminate among them in the visitors. Let’s take the expression rule above as an example: it will produce a visitor interface that contains the following signatures, rather than just a single ‘visitExpression’ method:

        /**
	 * Visit a parse tree produced by the {@code AlgebraicSum}
	 * labeled alternative in {@link ArithmeticParser#expression}.
	 * @param ctx the parse tree
	 * @return the visitor result
	 */
	T visitAlgebraicSum(ArithmeticParser.AlgebraicSumContext ctx);
	/**
	 * Visit a parse tree produced by the {@code Multiplication}
	 * labeled alternative in {@link ArithmeticParser#expression}.
	 * @param ctx the parse tree
	 * @return the visitor result
	 */
	T visitMultiplication(ArithmeticParser.MultiplicationContext ctx);
	/**
	 * Visit a parse tree produced by the {@code AtomicTerm}
	 * labeled alternative in {@link ArithmeticParser#expression}.
	 * @param ctx the parse tree
	 * @return the visitor result
	 */
	T visitAtomicTerm(ArithmeticParser.AtomicTermContext ctx);

Remember that either you label all the alternatives in a production or none: ANTLR does not allow you to name only a few.

The ‘expression‘ production introduces two more concepts: self embedding and left recursion. Self embedding happens when a symbol is capable of producing itself. In this case expression does this both directly (as in the AlgebraicSum and Multiplication alternatives) and indirectly (through the term production, with the alternative named InnerExpression). While self embedding is perfectly natural in a programming language (in fact you cannot express nested arithmetical expressions without it) and it is, in fact, the characteristic that distinguishes context free languages from regular languages, left recursion may be a problem for LL parsers like the one we are going to build. With JavaCC, for example, you would not be allowed to write a production like expression : expression ‘+’ expression. ANTLR, on the other hand, is able to recognize and resolve direct left recursion. As a desirable consequence of the strategy adopted by ANTLR, the precedence of the productions (which means the resulting precedence of the arithmetical operators) is given by the order in which the alternatives are listed. For example, in our production the Multiplication alternative will have a higher precedence than AlgebraicSum, and a string like:


1 + 2 * 3

will produce a parse tree that looks like this (snapshot of the ANTLR plugin for Eclipse):

[Image: parse tree for "1 + 2 * 3" (parsetree1)]

You have to be aware of this behavior, otherwise you could end up making the mistake I made in the first version of my grammar. Initially I wrote the productions in the following manner:

/* Warning: BROKEN GRAMMAR! Do not do this */
expression
	: expression '+' expression #Sum
	| expression '-' expression #Difference
	| multiplicativeExp #Term;

multiplicativeExp
	: multiplicativeExp '*' multiplicativeExp #Multiplication
	| multiplicativeExp '/' multiplicativeExp #Division
	| NUMBER ('.'NUMBER)? #Number
	| '(' expression ')' #InnerExpression;

In this version Sum has a higher precedence than Difference, and Multiplication has precedence over Division: this is not what we want.

In this instance, if you parse:


2 + 3 - 5 + 6

you get:

[Image: parse tree for "2 + 3 - 5 + 6" produced by the broken grammar (parsetree2)]

Not quite the right tree.

A “naive” interpreter

The first attempt to interpret the expressions will be a visitor that operates on the concrete parse tree. I call it “naive” because you do not need to define an AST object model: you just traverse the parse tree and “manually” skip all the productions and terminals that you do not care about. The implementation of such a visitor is in the NaiveInterpreterVisitor class. To get an idea, you visit the nodes in the following way:


	public Double visitAlgebraicSum(AlgebraicSumContext context) {
		String operand = context.getChild(1).getText();
		Double left = context.expression(0).accept(this);
		Double right = context.expression(1).accept(this);
		if(operand.equals("+")){
			return left + right;
		}
		else if(operand.equals("-")){
			return left - right;
		}
		else
		{
			throw new ArithmeticException("Something has really gone wrong");
		}
	}

Here we first find out what the operator is (remember that the AlgebraicSum node could store either a sum or a difference): to do that we get the second child of the current node (we know it must be a terminal) and then we get its text representation. In order to find out what the values of the left and right operands are, we pass the visitor (this) into their ‘accept’ method. Note that every “Context” object has convenient methods that correspond to the parsing functions of the related production, so that we can skip the noise of the terminals we do not need to interpret and go straight to the child nodes we care about. As a further example, consider the following method:

	@Override
	public Double visitInnerExpression(InnerExpressionContext context) {
		return context.expression().accept(this);
	}

Knowing how the corresponding production is defined

term: realNumber #Number
	| '(' expression ')' #InnerExpression;

we can skip the ugly parentheses and immediately pass the visitor into the meaningful subtree.
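The leaf case works in the same spirit: the Number alternative wraps a single realNumber, so all the visitor has to do is parse its text. The real implementation is in the NaiveInterpreterVisitor class on Github; a minimal sketch of it could look like this:

	@Override
	public Double visitNumber(NumberContext context) {
		// realNumber() gives us the only meaningful child; its text is something like "3" or "3.14"
		return Double.parseDouble(context.realNumber().getText());
	}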

The AST object model

Depending on the application needs, you may want an AST to represent the parsed strings. Unfortunately, starting from version 4, ANTLR no longer provides anything to automate this part of the process. Unlike other language recognition tools, it does not generate classes for the AST nodes based on some kind of annotations in the grammar file, nor does it produce a parser that builds an AST instead of a concrete parse tree (these are things that you can do with JJTree + JavaCC). As I mentioned, it seems to be a deliberate design decision, and this is where we left off at the end of the previous episode. In order to work with ASTs, I went through the following steps:

  1. defining an object model for the AST;
  2. writing a visitor of the parse tree that produces a corresponding AST object;
  3. defining a visitor interface for the AST and writing an interpreter that implements that interface.

The object model is very straightforward for our little language. I put the classes in the org.merka.arithmetic.language.ast package. I would also put here a better UML class diagram of the model if I knew a tool that would not take me three hours. Here’s the best that I can do in terms of UML drawing:

[Image: UML class diagram of the AST object model (ArithUML)]
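To give you an idea of how small the model is, here is a sketch of what a concrete node like SumASTNode could look like. I am reconstructing it from the way the visitors use it, so the details (for instance the return type of accept, or whether ArithmeticASTNode is an interface or a base class) are assumptions: refer to the repository for the real code.

	public class SumASTNode implements ArithmeticASTNode {

		private final ArithmeticASTNode leftOperand;
		private final ArithmeticASTNode rightOperand;

		public SumASTNode(ArithmeticASTNode leftOperand, ArithmeticASTNode rightOperand) {
			this.leftOperand = leftOperand;
			this.rightOperand = rightOperand;
		}

		public ArithmeticASTNode getLeftOperand() { return leftOperand; }
		public ArithmeticASTNode getRightOperand() { return rightOperand; }

		// Classic double dispatch: each concrete node calls the visit method that matches its own type
		public Object accept(ArithmeticASTVisitor visitor) {
			return visitor.visitSum(this);
		}
	}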

Building the AST with a visitor of the concrete parse tree

The approach I took is to use a parse tree visitor to build the AST. You can also use ANTLR listeners instead of visitors; I think it depends mostly on your needs and personal taste. The builder visitor is implemented in the ASTBuilderVisitor class. It traverses the tree much like the naive interpreter does, skipping all the terminals and the productions that are not meaningful for the abstract syntax. This is an example of an AST node construction:

	@Override
	public ArithmeticASTNode visitAlgebraicSum(AlgebraicSumContext context) {
		String operand = context.getChild(1).getText();
		ArithmeticASTNode leftOperand = context.expression(0).accept(this);
		ArithmeticASTNode rightOperand = context.expression(1).accept(this);
		if(operand.equals(PLUS)){
			return new SumASTNode(leftOperand, rightOperand);
		}
		else if (operand.equals(MINUS)){
			return new DifferenceASTNode(leftOperand, rightOperand);
		}
		else{
			throw new ArithmeticException("Something has really gone wrong: operand '" + operand + "' comes as a complete surprise");
		}
	}

As you can see, it’s almost identical to its interpreter counterpart.

The AST based interpreter

Finally we can define a Visitor interface based on the AST:

public interface ArithmeticASTVisitor {

	public Number visitDifference(DifferenceASTNode differenceNode);
	public Number visitDivision(DivisionASTNode divisionNode);
	public Number visitMultiplication(MultiplicationASTNode multiplicationNode);
	public Number visitSum(SumASTNode sumNode);
	public Number visitNumber(NumberASTNode numberNode);
}

Nothing weird here: there is just one method for each concrete node type. We can now write the interpretation methods in a concrete visitor; here’s an extract:

	@Override
	public Number visitSum(SumASTNode sumNode) {
		Number leftOperand = (Number)sumNode.getLeftOperand().accept(this);
		Number rightOperand = (Number)sumNode.getRightOperand().accept(this);
		return leftOperand.doubleValue() + rightOperand.doubleValue();
	}

See the EvaluationVisitor class for the complete code. A side note: an obvious improvement would be to rewrite the grammar so that Sum and Difference, as well as Multiplication and Division, are in different productions, thus producing nodes of different types in the parse tree. That way we could avoid the ugly if/else in the visitSum and visitMultiplication methods.
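To wrap up the episode, this is roughly how the pieces fit together, from the input string down to the computed value. The lexer and parser class names follow the usual ANTLR conventions for a grammar called Arithmetic, while the accept signature of the AST nodes is the same assumption made in the sketch above; the tests in the Github project show the real wiring:

	// Parse the text and obtain the concrete parse tree
	ArithmeticLexer lexer = new ArithmeticLexer(new ANTLRInputStream("1 + 2 * 3"));
	ArithmeticParser parser = new ArithmeticParser(new CommonTokenStream(lexer));
	ProgramContext context = parser.program();

	// Turn the parse tree into an AST, then interpret the AST
	ArithmeticASTNode ast = context.accept(new ASTBuilderVisitor());
	Number result = (Number) ast.accept(new EvaluationVisitor());
	// result.doubleValue() is expected to be 7.0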

ANTLR4 project with Maven – Tutorial (episode 2)

[The reference tag for this episode is step-0.2.]

At the end of the previous episode we were able to feed sentences to the parser and find out whether they are valid (i.e. belong to the language) or not. In this post I will show you how you can implement a visitor to interpret the language.

At the end of every successful parsing operation, the parser produces a concrete syntax tree. The parser exposes a parsing method named after the start symbol (program() in our case), which returns an object of type <StartSymbol>Context, representing the root node of the concrete syntax tree (a structure that follows the classic Composite pattern). In our case, take a look at the testJsonVisitor test method (forget about the “Json” part of the name, the method is named like this by mistake):


 @Test
 public void testJsonVisitor() throws IOException{
    String program = "sphere 0 0 0 cube 5 5 5 sphere 10 1 3";
    TestErrorListener errorListener = new TestErrorListener(); 
    ProgramContext context = parseProgram(program, errorListener);
 
    assertFalse(errorListener.isFail());
 
    BasicDumpVisitor visitor = new BasicDumpVisitor();
 
    String jsonRepresentation = context.accept(visitor);
    logger.info("String returned by the visitor = " + jsonRepresentation);

...
 
 }

After parsing the string, the test method instantiates a visitor object (BasicDumpVisitor) and provides it with the ProgramContext object as the input.

Let’s take a closer look at the visitor. If you use the -visitor option when calling the preprocessor, ANTLR, alongside the parser, generates for you a basic interface for a visitor that can walk the concrete syntax tree. All you have to do is implement that interface and make sure that the tree nodes are visited in the right order.

I created the BasicDumpVisitor class, the simplest visitor that I could think of: it walks the tree and creates a string (also known as “a program”) that, once parsed, gives back the visited tree. In other words it just dumps the original program that created the current concrete syntax tree.

The base visitor interface is declared as follows:


public interface ShapePlacerVisitor<T> extends ParseTreeVisitor<T>

The name’s prefix (ShapePlacer) is taken from the name of the grammar, as defined in the grammar source file. The interface contains a “visit” method for each node type of the parse tree, as expected. Moreover, it has a bunch of extra methods inherited from the base interface ParseTreeVisitor: see the source code to get an idea, they are quite self-explanatory. I report here one of the “visit” methods as an example. The other ones in the class follow a similar logic:


	public String visitShapeDefinition(ShapeDefinitionContext ctx) {
		StringBuilder builder = new StringBuilder();
		for (ParseTree tree : ctx.children){
			builder.append(tree.accept(this) + " ");
		}
		builder.append("\n");
		return builder.toString();
	} 

The interface is generic: when you implement it, you have to specify the actual type parameter, which becomes the return type of each “visit” method.
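In the case of the dump visitor that type is String, so the declaration looks more or less like the following (AbstractParseTreeVisitor is the ANTLR runtime class that provides the generic plumbing; whether the class in the repository extends it or implements the interface from scratch is a detail worth checking in the source):

	public class BasicDumpVisitor extends AbstractParseTreeVisitor<String>
			implements ShapePlacerVisitor<String>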

So far, I have always written about “concrete syntax tree”. When we deal with interpreted languages, we usually want to manipulate an Abstract Syntax Tree (AST), that is, a tree structure that omits every syntactic detail that is not useful to the interpreter and can be easily inferred from the structure of the tree itself. In the case of our language, if we have, say, the string “sphere 1 1 1”, the parser creates for us a tree that looks like this:

  • program
    • shapeDefinition
      • sphereDefinition
        • SPHERE_KEYWORD
        • coordinates
          • NUMBER [“1”]
          • NUMBER [“1”]
          • NUMBER [“1”]

That is not ideal since, when it comes down to interpretation, we may want to work with something that looks like this:

  • sphereDefinition
    • coordinates
      • 1
      • 1
      • 1

or, depending on your needs, something even simpler, for example:

  • sphere definition
    • (1, 1, 1)

Unfortunately ANTLR 4, unlike its previous versions, does not allow for tree rewriting in the grammar file. As far as I know, this is a precise design decision. So, if you want to work with an AST of your own invention, you have to build it yourself, i.e. you have to write the tree node classes and the code that traverses the concrete syntax tree and builds the AST. Hopefully, I will cover this case in the next episode.

ANTLR4 project with Maven – Tutorial (episode 1)

Introduction

I’ve always been fascinated by Language Theory and the related technologies. Since I have been predominantly a Java guy, I used to use JavaCC and JJTree to build parsers and interpreters. Nowadays it seems that the big name in the field of language recognition is ANTLR. I have wanted to learn more about ANTLR for a long time and lately I finally had the opportunity to spend some time on it. I thought it would be a good idea to share the sample projects I created in the process.

I plan to write at least three parts:

  1. Setup of the project with all the basic pieces working.
  2. Implementation of a visitor.
  3. Grammar refinement to include self-embedding and implementation of a second visitor.

Through this series I will design a language to specify the position of some geometrical shapes, which will later be used to add shapes to the gravity simulator 3D scene (at least this is the idea).

The whole source code is available for download at https://github.com/Rospaccio/learnantlr. The project contains some tags that are related to the various episodes of the tutorial (unfortunately not always with the corresponding sequence number, but I will make sure to reference the right tag for each episode).

Disclaimer: this is not a comprehensive guide to ANTLR and I am not an expert in the field of Language Theory nor in ANTLR. It’s just a sharing of my (self) educational process. The focus is almost entirely on the setup of a build process through Maven, not on the internals of ANTLR itself nor the best practices to design a grammar (though I will occasionally touch on those topics).

Project setup (Git tag: v0.1)

Outline:

  1. first version of the language;
  2. basic pom.xml;
  3. specification of the grammar;
  4. first build.

First version of the language

The first version of the language is going to be very trivial, and it is supposed to be just a pretext to show what a possible pom looks like. We want to be able to recognize strings like the following:

cube 0 0 0
sphere 12 2 3
cube 1 1 1
cube 4 3 10
<etc...>

where the initial keyword (“cube” or “sphere”) specifies the nature of the shape and the following three numbers specify the coordinates of the shape in a three dimensional space.

Basic POM

ANTLR has very good integration with Maven: every necessary compile dependency is available from the central repository. Plus, there’s a super handy plugin that invokes the ANTLR processor, so it is possible to define and tune the entire build process through the pom.xml file. But enough of these words, let’s vomit some code.

You can start the project as a default empty Maven project with jar packaging. First, you need to add the ANTLR dependency in your pom.xml file. Here’s the fragment:

<properties>
	<antlr4.plugin.version>4.5</antlr4.plugin.version>
	<antlr4.version>4.5</antlr4.version>
</properties>
<dependencies>
	<dependency>
		<groupId>org.antlr</groupId>
		<artifactId>antlr4-runtime</artifactId>
		<version>${antlr4.version}</version>
	</dependency>

	<dependency>
		<groupId>org.antlr</groupId>
		<artifactId>antlr4-maven-plugin</artifactId>
		<version>${antlr4.plugin.version}</version>
	</dependency>
</dependencies>

Here I am using version 4.5, which is the latest available at the time of writing, because it supports Javascript as a target language, a feature that I am going to use later in the tutorial.

The first dependency, antlr4-runtime, as the name suggests, is the runtime support for the code generated by ANTLR (basically it’s what you need to compile the generated code and execute it). It contains the base types and classes used by the generated parsers.

The second, antlr4-maven-plugin, is the plugin that can be used in the “generate-sources” phase of the build. To actually use it, the following fragment is also needed:

<build>
	<plugins>
		<plugin>
			<groupId>org.antlr</groupId>
			<artifactId>antlr4-maven-plugin</artifactId>
			<version>${antlr4.plugin.version}</version>
			<configuration>
				<arguments>
					<argument>-visitor</argument>
					<!-- <argument>-Dlanguage=JavaScript</argument> -->
				</arguments>
			</configuration>
			<executions>
				<execution>
					<goals>
						<goal>antlr4</goal>
					</goals>
				</execution>
			</executions>
		</plugin>
	</plugins>
</build>

Note that you can pass arguments to the ANTLR tool: I use the -visitor option because it generates a handy interface that you can implement in a Visitor class for the parse tree.

Specification of the grammar

In order to have something that makes sense, let’s add a grammar file in the appropriate folder. Create the file (in this case ShapePlacer.g4) inside src/main/antlr4. Also make sure to build a folder structure that mimics the package structure that you want for the generated classes. For example, if you place the grammar file inside src/main/antlr4/org/my/package, the generated classes will belong to the package named org.my.package.
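Just to visualize it, the layout for the hypothetical org.my.package example would be:

	src/
	  main/
	    antlr4/
	      org/
	        my/
	          package/
	            ShapePlacer.g4

and the generated sources will end up under the same org/my/package path inside target/generated-sources/antlr4 (more on that in a moment).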

Here’s our first grammar:

grammar ShapePlacer;
program : (shapeDefinition)+ ;
shapeDefinition : sphereDefinition | cubeDefinition ;
sphereDefinition : SPHERE_KEYWORD coordinates ;
cubeDefinition : CUBE_KEYWORD coordinates ;
coordinates : NUMBER NUMBER NUMBER ;
SPHERE_KEYWORD : 'sphere' ;
CUBE_KEYWORD : 'cube' ;
NUMBER : [0-9]+ ;
WS : [ \t\r\n]+ -> skip ; // skip spaces, tabs, newlines

Not a very interesting language, but for now we only need to see whether the build works.

First build

In order to do that, type mvn clean package in a terminal window and see what happens.

What happened

During the generate-sources phase of the Maven build (i.e. certainly before compile), the ANTLR plugin is activated and its default goal (“antlr4”) is called. It invokes the antlr4 processor on the grammar files (by default, it looks recursively inside src/main/antlr4 and compiles every .g4 file it finds). If the goal executes with no errors, the generated source files are placed in target/generated-sources/antlr4, and they are automatically taken into account for the compile phase.
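For a grammar called ShapePlacer built with the -visitor option, the generated sources should look roughly like this (the exact list may vary slightly between ANTLR versions):

	ShapePlacerLexer.java
	ShapePlacerParser.java
	ShapePlacerListener.java
	ShapePlacerBaseListener.java
	ShapePlacerVisitor.java
	ShapePlacerBaseVisitor.java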

As we do not have any manually written source files yet, only the generated files are compiled and included in the jar.

Test

Let’s try and test the parser. To do that we can add a JUnit test case with a test that looks like the following (please browse the source code to find out more about details like the TestErrorListener class):

	@Test
	public void testExploratoryString() throws IOException {

		String simplestProgram = "sphere 12 12 12 cube 2 3 4 cube 4 4 4 sphere 3 3 3";

		CharStream inputCharStream = new ANTLRInputStream(new StringReader(simplestProgram));
		TokenSource tokenSource = new ShapePlacerLexer(inputCharStream);
		TokenStream inputTokenStream = new CommonTokenStream(tokenSource);
		ShapePlacerParser parser = new ShapePlacerParser(inputTokenStream);

		parser.addErrorListener(new TestErrorListener());

		ProgramContext context = parser.program();

		logger.info(context.toString());
	}

Lubuntu on VMWare Tutorial – “Behind a Proxy” Edition

A Lubuntu virtual machine on VMWare Player is currently one of my standard tools for developing and testing software. I had a Hard Time (c) the first time I installed and fully configured such an instance because I was (suspense, suspense…) behind a proxy! The need for this setup was triggered by the fact that I like my VMWare machines to be able to resize the desktop as I resize the VMWare window that contains them. That feature is available in most Linux distros only if you install the VMWare-tools package on the guest machine. That, in turn, requires the build-essential package (gcc, make, and the like).

Here’s the outline:

  1. set the system-wide proxy;
  2. tell apt to use the proxy (’cause no, it won’t use the fucking system proxy);
  3. update and upgrade apt;
  4. install build-essential with apt-install;
  5. install VMWare-tools.

Now that I read it, it seems like an easy thing to do, but it took me some time to get all the pieces set up, so I guess it’s worth writing a brief tutorial to share what I have learned. Of course, I did not find out all these things by myself: this is just a summary of the knowledge that I gathered from the Internet (thank you, Internet).

1. Set the system-wide proxy.

Interestingly, Lubuntu does not have a nice GUI to let you set the proxy, so you have to do it some other way. For me, the best way is to just set the appropriate variables inside the /etc/environment file, so that they are shared across all users. Since it is going to be a development box I do not care at all about which user logs in. I just want the variables available and I want it fast.

To do that, open the aforementioned file with:

sudo nano /etc/environment

or with your favourite text editor (I am sorry if it’s vim), and make sure that the following lines are added:

http_proxy=http://101.101.101.101:3456
https_proxy=http://101.101.101.101:3456
ftp_proxy=http://101.101.101.101:3456
no_proxy="localhost,127.0.0.1"

(As you might imagine, you have to replace the fake IP addresses and ports with your proxy’s ones). To make the change effective you must log out and log in again.

2. Tell apt to use the proxy.

Because otherwise it won’t. Open the file /etc/apt/apt.conf for editing. If it does not exist, create it. Add the following lines:

Acquire::http::Proxy "http://101.101.101.101:3456";
Acquire::https::Proxy "https://101.101.101.101:3456";
Acquire::socks::Proxy "socks://101.101.101.101:3456";
Acquire::ftp::Proxy "ftp://101.101.101.101:3456";

Again, replace the “101” and “3456” placeholders with your actual addresses and ports, and save the file.

3. update and upgrade apt.

Run the following commands to bring apt up to date:

sudo apt-get update
sudo apt-get upgrade

4. Install build-essential with apt-install.

Now you are ready to use apt commands behind your proxy. Type

sudo apt-get install build-essential

to install the necessary build tools.

5. Install VMWare-tools

Now that you have all the prerequisites, you are ready to install the VMWare-tools. Select the menu item as in the picture below, follow the instructions that VMWare Player prompts you with, and you should be fine.

[Screenshot: VMware Player menu entry for installing VMware Tools (player-tools)]

Stress reducing tools and methodologies

A brief list of things that made me a better developer and a less anxious person.

Test Driven Development: Even if nobody in your team is doing TDD and your manager thinks it is just building a servlet for each back end Web Service, you can start applying TDD today and become a better developer. Of all the good natural consequences of TDD, what I like most is its stress reducing effect. If I feel afraid that something might be broken or might fail in production, I just add more and more test cases, until every edge case is covered. I no longer wake up in the middle of the night worried about what could happen tomorrow when the servers are restarted. Everything can still go wrong like it used to, but your level of confidence in your code and your reaction speed are greatly improved. You just feel better. And a fix is usually much easier to implement than without tests. Let alone the fact that stupid bugs will actually appear far less frequently than before…

Git: I hated the beast and avoided it like hell until I found an illuminating video on Youtube where a clever guy managed to explain with great clearness how Git actually works and how you can use it effectively. That has been a turning point. I realized that I was unable to use it because of a lack of understanding. And once you see what branching and merging really mean, you feel powerful, and a whole new world of possibilities unfolds before your eyes. It’s like living in three dimensions after having spent your whole life in Flatland. As with TDD, you do not have to wait until your company understands that leaving SVN and transitioning to Git is the right thing to do (here I don’t even want to take into consideration SourceSafe, ClearCase or other hideous abominations): you can start using it today. Just “git init” a repository inside the root directory of a project; it does not matter if it’s under SVN source control, if you “gitignore” the right things the two do not interfere with each other. And you are ready to go. Now I wonder how I could have lived so long without Git.

Maven: you can say that it is verbose, it is slow, it eats up a lot of disk space, it’s ugly… I don’t care. I have personally seen what a build system without proper dependency management can be and what it can cost in terms of time, money and stress: it’s not even a build system, it’s a destruction system. Maven is currently my default. There is only one thing that pisses me off more than a project not using Maven: one that uses it badly. If a person is not able to check out a project and run a clean build on the first try, you are doing something wrong.

Sonarqube: A wonderful free tool that helps you improve your code. It’s a bunch of tools that perform static analysis of code, integrated in a web application that keeps track of how the various parameters of the projects evolve from build to build. You can always learn something from the issues detected by Sonarqube and their accompanying descriptions. And it feels good to see how the color of a project shifts from red, to yellow, to green as you become a better programmer.

Virtual Machines: This is incredibly important, fundamental even, if you happen to work in a hybrid environment. A usual situation for me is having a development machine running Windows and a deployment environment (for test, UAT, production, etc…) completely based on Linux. This is not so strange if you work with JavaEE: most applications and systems actually behave in the same way in Windows and Linux… almost… That is why you always want to give them a spin in a Linux box before releasing them. After trying almost every virtualization software, my current choice is VMWare Player + Lubuntu. The first is available free of charge for non commercial use and works surprisingly well; the second is a lightweight Linux distro based on Ubuntu that gets rid of the super ugly desktop environment of Canonical and replaces it with LXDE, which requires few resources and performs well in virtual machines and on older computers.

Fear Driven Development – Enterprise Induced Worst Practices (part 0)

The Internationalization Antipattern

Some years ago I was still working for Big Time Consulting, but I was not even a proper employee. I was a contractor from a company owned by BTC. Well, you figure out the real names. I was sort of an onshore Indian developer. We had this huge system built upon Liferay. The system was composed of hundreds of portlets, scattered across tens of WARs. The portal was heavily customized. The language strings for every portlet were all defined in a single Language.properties file at the portal level. That’s right: WARs did not have their own Language file: everything was defined centrally. That meant that if you needed to change the label of a button, you had to modify the portal Language file, build an artifact that belonged to the architectural sector of the system (i.e. it impacted ALL the portlets) and then, once deployed, restart the servers involved in the process.

Nowhere along this path there was an automated test.

As you might imagine, quite often things went wrong. The least severe issue that you could get was the total replacement of the language strings with their corresponding keys (that was the default behavior in that version of Liferay: if a string was not found, it was simply set to its key). So, after the reboot, everything on every page looked something like “add.user.button.text”, “customer.list.title”, “user.not.found.error.message” and so on. Everywhere. In every single application. The default reaction in the team was “The Strings have been fucked up. Again.”

On the extreme end of the spectrum there was a funny set of poltergeist bugs: mysterious NoClassDefFoundErrors, ClassCastExceptions, Liferay showing a bare white page, Liferay showing every graphical component in the wrong place, portlets not deploying, etc…

After being forced to spend a couple of looong evenings fixing these issues (did I mention that the entire process of compiling, packaging and deploying was excruciatingly long?), I learned my lesson: never mess with the strings again. I decided to apply my personal internationalization antipattern: always include a default value for every string with

LanguageUtil.get(Locale locale, String key, String defaultValue)

and don’t even try to package the architectural artifact (AA, from now on). Just modify and commit the Language file. Then deploy the WAR: the next day the strings magically appear on the screen, and nobody will ever notice that they are hardcoded. Wait until the next release cycle of the AA to have the strings file available. Luckily you won’t be the one who needs to deploy it so, if something goes wrong, you can blame someone else and save your evenings.