The result is a tension between old-fashioned bureaucracy and modern development practices: the need to prove you have a rigorous process in place versus the desire to push frequent releases to production without wasteful paperwork.
The Monetary Authority of Singapore (MAS) is Singapore’s central bank and regulatory authority. If you want to do business in Singapore, you need to comply with their regulations. They’ve published what they call the Technology Risk Management (TRM) Guidelines to help with “the adoption of sound practices and processes for managing technology”. Each financial institution with an interest in Singapore is expected to adopt these guidelines with the following caveat.
“Financial institutions (FIs) may adapt these guidelines, taking into account the diverse activities they engage in and the markets in which they conduct transactions. FIs should read the Guidelines in conjunction with relevant regulatory requirements and industry standards.”
One area of the TRM guidelines talks about the need to conduct code reviews (section 6.3). Each institution must interpret these guidelines and be comfortable articulating how they manage the source code review process. The TRM guidelines don't say how to conduct reviews, and doing effective code reviews is hard.
Regulators struggle to keep up with modern software development practices and how they can help give confidence that guidelines are being followed. They struggle to assess the impact of things like containerised deployments and having data on the public cloud. For source code reviews, each institution must adapt the guidelines to fit their development practices but at a minimum, each should be able to provide evidence.
So how do we evidence that code reviews have taken place? A traditional process would be to conduct an out-of-band code review (after the coding task has been completed) and document it via some tool like Codacy or Crucible. GitHub (and GitHub Enterprise) has nice integrated review tools and Bitbucket has something similar. These are often combined with the act of merging a Pull Request, i.e. you do the code review when you accept a PR.
Tools that combine merging branches with code reviews conflate two distinct ideas (managing your source code and peer reviewing source code). If you do both actions together, your branching model has been chosen for you and doing trunk based development (TBD) becomes harder. If you do try for TBD and use short lived branches purely to facilitate code reviews, you’re introducing waste (albeit the cost may be reclaimed somewhat by the tooling that motivated the decision).
Trunk based development has lots of advantages in its own right, not least of which is that it offers continuous integration (CI). CI has been around since the early 90s and helps reduce the feedback loop for dev teams. Compared to long-running feature branches, where integration is by definition intermittent and infrequent, it helps find potential problems early.
“Continuous Integration doesn’t get rid of bugs, but it does make them dramatically easier to find and remove.”
Pair Programming as an alternative to traditional out-of-band code reviews offers significant advantages. Again, mostly this is around shortening the feedback loop and fixing potential problems early. If code reviews are a good idea, why not do them all the time, when you're actually coding and not when you've finished?
So how do you practice trunk based development, utilise Pair Programming and still provide evidence that you've had multiple developers work on a single piece of code? Perhaps "crypto-evidence" can help.
Crypto-evidence is a term I made up for this post, but it implies that we should be able to supply evidence that can be proven via cryptographic means. With traditional techniques, a code reviewer can be proven to be who she says she is (and that she's a different person than the original author) by the login credentials used with the review and VCS tools. These would be encrypted and so (in theory) cannot be subverted or faked. Assuming the rest of the review is locked down, we have our evidence.
Git is probably the most widely used VCS today (some figures suggest it's used by 70% of projects). It offers a feature called signed commits whereby the person committing code digitally signs the commit with GPG. This can be retrieved (along with the author information) and, in theory, prove a different individual "signed" a commit than its author. This signature could be used as a sign-off for a code review as only the signatory knows her passphrase. Just use `-S` (not `-s`):
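For example (the commit message here is illustrative):

```
$ git commit -S -m "add feature X"
```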
…and verify with `git log --show-signature`:
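The output includes the signature details; something like this (hashes, dates and identities here are illustrative):

```
commit ca82a6dff817ec66f44342007202690a93763949
gpg: Signature made Thu 13 Apr 15:32:11 2017 BST using RSA key ID 39E273602
gpg: Good signature from "Toby <toby@example.com>"
Author: Toby <toby@example.com>
Date:   Thu Apr 13 15:32:11 2017 +0100

    add feature X
```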
The key line includes `Signature made ... using RSA key 39E273602`. Job done? Not quite…
Assuming we want to prove Pair Programming was used, we need to show an alternative author from the signatory. We can do that by either supplying the `--author` argument or (more practically) sharing the team's private keys securely and signing based on who's at the computer.
On my computer (the default key is set with `git config --global user.signingkey 39E273602`) and with Barry pairing:
$ git commit -S --author="Barry <barry@badrobot.com>" -m "commit from Toby's machine with Barry as the author"
Or, if each machine has every developer’s private key installed in GPG, we can rotate developers and sign from any machine.
From Barry’s machine (with Toby pairing):
$ git commit -Stoby -m "commit as Barry with Toby signing the commit"
Both would result in something like the following.
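Something like this (illustrative output); note the signatory and author differ:

```
commit 4a21ac12e32f011f3e837a2b46e46a0a22a8dfad
gpg: Signature made Thu 13 Apr 16:01:44 2017 BST using RSA key ID 39E273602
gpg: Good signature from "Toby <toby@example.com>"
Author: Barry <barry@badrobot.com>

    commit from Toby's machine with Barry as the author
```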
Showing Toby as the signatory (code reviewer) and Barry as the author. Cryptographically provable that two developers worked on the code.
When you force each commit to be signed (with `git config --global commit.gpgsign true`), you should be able to build a report from the Git log showing every commit was paired on and so code reviewed.
|
To share private keys, export your key (you may choose to share this via Git alongside your source code).
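A sketch with GnuPG (the key name and file name are illustrative):

```
$ gpg --export-secret-keys -a "Toby" > toby-private.key
```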
…and import it on each developer’s machine.
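For example:

```
$ gpg --import toby-private.key
```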
Just pass in the name you used when creating the key to the `-S` argument. In my case, `-Stoby`.
Unfortunately, this technique doesn’t come without problems.
Rebasing will rewrite a commit after pulling down changes. As it attempts to rewrite the commit, it will take your `commit.gpgsign` setting into account and potentially overwrite someone else's signature with your own. If the setting is `false`, it will appear as if there was no signature at all. This is potentially a deal breaker as rebasing (even from a single branch, as is the case for TBD) is a very common use case. Even when merging rather than rebasing, it only takes one rebase to subvert the entire process.
IDE support is also limited. IntelliJ IDEA doesn't support it (vote for the issue here) so you're left to use the command line for commits. Your VCS system might also cause you problems - your organisation might reject commits where the `--author` has changed, which may limit your options.
Git's signed commit feature was never meant for this. It's really a way to prove a commit came from who it claims to come from. It was motivated by large open source projects where the authenticity of commits is required (for example, for merging only from trusted authors). We're trying to misuse the feature here and we get some way towards achieving our goals with it. It just breaks down in a few key areas.
A warning from Git’s own documentation:
Signing tags and commits is great, but if you decide to use this in your normal workflow, you’ll have to make sure that everyone on your team understands how to do so. If you don’t, you’ll end up spending a lot of time helping people figure out how to rewrite their commits with signed versions. Make sure you understand GPG and the benefits of signing things before adopting this as part of your standard workflow.
I'd love to know your experiences with TBD, Pair Programming and evidencing source code reviews; let me know below.
With the following alias, you can print a nice one-liner per commit, including who signed it.
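A sketch of such an alias using Git's `%GS` (signer) and `%aN` (author) format placeholders (the alias name is illustrative):

```
$ git config --global alias.review-report \
    "log --pretty=format:'%h signed-by: %GS authored-by: %aN %s'"
$ git review-report
```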
To test your GPG program, run the following.
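A quick sanity check, assuming GnuPG is installed:

```
$ echo "test" | gpg --clearsign
```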
If the output includes the following.
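…an error along these lines (assuming the common pinentry failure seen on a Mac):

```
gpg: signing failed: Inappropriate ioctl for device
```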
You may need to run the following (I did on MacOSX).
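…which tells GPG which terminal to use for the passphrase prompt (add it to your shell profile to make it stick):

```
$ export GPG_TTY=$(tty)
```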
This pair of posts looks at distributing software as `.deb` files and deploying via `apt`.
The basic approach:

1. Create a `.deb` package using `sbt` and the excellent `sbt-native-packager`
2. Publish it to a Debian repository so users can install and upgrade it via `apt-get`
This post looks at the first step, creating the `.deb` package using the `sbt-native-packager` plugin. In the next post, we'll look at how to set up your own Debian repository so users can install and upgrade your software via `apt-get` on popular Linux distros like Debian and Ubuntu.
Thanks and appreciation to @muuki88 and the project maintainers for `sbt-native-packager`; it's a seriously useful piece of software.
The `sbt-native-packager` plugin can create platform-specific distributions for any `sbt` project. It can create a simple zip bundle for your application (the `universal` package), tarballs, RPMs, Mac DMGs, Windows MSIs, Docker images and, the one we're interested in, `.deb` files. Debian and derivatives like Ubuntu use this package format with `dpkg` and `apt` as the standard way to install and manage software. If you want to make it easy for Linux users to get hold of your software, this is the way to do it.
The native packager takes care of packaging, the act of putting a list of mappings (source file to install target path) into the chosen package format (zip, rpm, etc.). Each packaging format you use will expect the package-specific files in a specific folder in your source. For example, the Universal plugin will look in `src/universal` for files to add to the zip. Debian packaging will look in `src/debian`.
You'll need to read up a little on how to use the plugin (see the Debian-specific instructions) but the core concepts are as follows.
The native packager uses Project Archetypes like the Java CLI Application Archetype or Java Server Application Archetype to add additional files to the mappings, enriching the created package. They don't provide any new features per se.
One of the many awesome features of the packager is that it can set your application up as a service on the target system. For example, if you want to start your application using `systemd`, just add an `enablePlugins(SystemdPlugin)` line to your build. The OS will take care of everything else, even restarting your application should it crash.
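A minimal sketch of the relevant `build.sbt` settings (the name, version and maintainer details are illustrative):

```scala
// build.sbt
enablePlugins(JavaServerAppPackaging, DebianPlugin, SystemdPlugin)

name := "my-app"
version := "1.0.0"
maintainer := "Toby <toby@example.com>"
packageSummary := "An example server application"
packageDescription := "A longer description used in the Debian package metadata."
```

Running `sbt debian:packageBin` should then produce the `.deb` under `target/`.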
The native packager doesn't provide application lifecycle management, however. Custom start/stop scripts, PID management, etc. are not part of the native packager.
For some additional nice-to-haves, you might consider adding `man` pages (with `ronn`), creating a "fat" JAR (with the `sbt-assembly` plugin) and stripping unused Scala library classes with ProGuard.
The native packager doesn't take care of deployment. You use it to create your `.deb` package but what you do with it is down to you. The obvious choice is to deploy it to a Debian repository for your users to download via `apt`. Read the next post to find out how.
Get up to date.
$ sudo apt-get update && sudo apt-get upgrade -y
Verify nothing is wrong. Verify no errors are reported after each command. Fix as required (you’re on your own here!).
$ dpkg -C
$ apt-mark showhold
Update the `apt-get` sources. This replaces "stretch" with "buster" in the repository locations, giving `apt-get` access to the new version's binaries.
$ sudo sed -i 's/stretch/buster/g' /etc/apt/sources.list
$ sudo sed -i 's/stretch/buster/g' /etc/apt/sources.list.d/raspi.list
Verify this caught them all by running the following, expecting no output. If the command returns anything having previously run the `sed` commands above, it means more files may need tweaking. Run the `sed` command for each. The aim is to replace all instances of "stretch".
$ grep -lnr stretch /etc/apt
Speed up subsequent steps by removing the list change package.
$ sudo apt-get remove apt-listchanges
To update existing packages without updating kernel modules or removing packages, run the following.
$ sudo apt-get update && sudo apt-get upgrade -y
Alternatively, to include kernel modules and remove packages if required, run the following (choose one, not both; see this question for details).
$ sudo apt-get update && sudo apt-get full-upgrade -y
Cleanup old outdated packages.
$ sudo apt-get autoremove -y && sudo apt-get autoclean
Verify with `cat /etc/os-release`.
You’ve come this far, might as well get the latest firmware.
$ sudo rpi-update
The first video on the site gives an introduction to refactoring and some examples in Java from chapter 1 of Martin Fowler's book. The source code is on GitHub.
Get up to date.
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get dist-upgrade
Verify nothing is wrong. Verify no errors are reported after each command. Fix as required (you’re on your own here!).
$ dpkg -C
$ apt-mark showhold
Optionally upgrade the firmware.
$ sudo rpi-update
Update the `apt-get` sources. This replaces "jessie" with "stretch" in the repository locations, giving `apt-get` access to the new version's binaries.
$ sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list
$ sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list.d/raspi.list
Verify this caught them all. Run the following, expecting no output. If the command returns anything having previously run the `sed` commands above, it means more files may need tweaking. Run the `sed` command for each.
$ grep -lnr jessie /etc/apt
Speed up subsequent steps by removing the list change package.
$ sudo apt-get remove apt-listchanges
$ sudo apt-get update && sudo apt-get upgrade -y
$ sudo apt-get dist-upgrade -y
Cleanup old outdated packages.
$ sudo apt-get autoremove -y && sudo apt-get autoclean
Verify with `cat /etc/os-release`.
You’ve come this far, might as well get the latest firmware.
$ sudo rpi-update
Download your image and burn it to an SD card. You can't go far wrong with a SanDisk 16GB microSDHC memory card for £6.99.
Run `raspi-config`, select `Boot Options` then `B1 Console`.
Modify the `/etc/network/interfaces` file to access a network (with hidden SSID).
source-directory /etc/network/interfaces.d
auto lo
iface lo inet loopback
iface eth0 inet dhcp
auto wlan0
allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-scan-ssid 1
wpa-ap-scan 1
wpa-key-mgmt WPA-PSK
wpa-proto RSN WPA
wpa-pairwise CCMP TKIP
wpa-group CCMP TKIP
wpa-ssid "network-name"
wpa-psk 72084.....654
iface default inet dhcp
The `8192cu`-based wifi dongles will often go to sleep if they're not getting any traffic. This means broken terminal sessions and general annoyances. Prevent it happening by adding the following to `/etc/modprobe.d/8192cu.conf`. Create the file if it doesn't already exist (reference).
options 8192cu rtw_power_mgnt=0 rtw_enusbss=0
You can check if you have the `8192cu` module loaded with `lsmod`. If you don't see it in the list, don't bother!
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo rpi-update
$ sudo apt-get install build-essential git avahi-daemon libavahi-client-dev oracle-java8-jdk
For Scala development.
$ cd /usr/local/bin
$ sudo wget https://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt-launch/0.13.13/sbt-launch.jar
$ sudo chown pi sbt-launch.jar
$ sudo chgrp pi sbt-launch.jar
Create a file `/usr/local/bin/sbt` (change the owner and group as above) and paste the following in. Take note that the max memory is set to 512 MB for the Pi Zero.
#!/bin/bash
SBT_OPTS="-Xms512M -Xmx512M"
java $SBT_OPTS -jar `dirname $0`/sbt-launch.jar "$@"
Note that there's not much memory on the Pi, so limit `sbt`'s consumption. In this case, the 512 MB limit suits the Pi Zero.
Then make it executable.
chmod u+x /usr/local/bin/sbt
Add the following to ~/.bashrc
# some more ls aliases
alias ll='ls -lavh --group-directories-first'
alias la='ls -A'
#alias l='ls -CF'
# Toby's ones
alias path='echo -e ${PATH//:/\\n}'
alias libpath='echo -e ${LD_LIBRARY_PATH//:/\\n}'
alias du='du -kh' # Makes a more readable output.
alias df='df -kTh'
Add the following to `/boot/config.txt` (tested on the Pi Zero only).
# disable the activity LED (intended for the Pi Zero)
dtparam=act_led_trigger=none
dtparam=act_led_activelow=on
sudo apt-get install netatalk
For a basic `vi` setup:
$ echo 'set nocompatible' > ~/.vimrc
sudo cp /usr/share/zoneinfo/Europe/London /etc/localtime
Type classes give you polymorphism without relying on subtype (`extends`) polymorphism.
From the Neophytes Guide, Daniel Westheide describes type classes, slightly paraphrased, as follows.
A type class `C` defines behaviour. Type `T` must support the behaviour defined in `C` to be a "member" of `C`. If `T` is a "member", the behaviour isn't inherent to that type (if `T` has `C`'s behaviour, it isn't native to that type via `extends` or otherwise). Instead, anyone can supply implementations of `C`'s behaviour for type `T` and this infers that `T` is a "member" of `C`.
The pattern then boils down to:

- defining the type class `C` as a trait
- providing implementations (for types like `T` above)
- using `C` in a common way (optionally extending "members" like `T` with implicit classes)

The type class here is `NumberLike`, providing abstract `plus`, `divide` and `minus` behaviours. Types `Int` and `Double` are "members" of `NumberLike`. `Int` and `Double` don't natively have the behaviours of `NumberLike`. Instead, the implementations on the `NumberLike` object provide them. Notice the parameterised type `[T]`.
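A minimal sketch of such a trait (the method signatures follow the prose; the details are illustrative):

```scala
trait NumberLike[T] {
  def plus(x: T, y: T): T
  def minus(x: T, y: T): T
  def divide(x: T, y: Int): T
}
```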
Provide some default implementations of your type class trait in its companion object. Usually, these are singletons (`object`) but they could be `val`s. They are always `implicit`.
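For example (a sketch; the `Int` and `Double` members mirror the prose):

```scala
object NumberLike {
  implicit object NumberLikeInt extends NumberLike[Int] {
    def plus(x: Int, y: Int): Int = x + y
    def minus(x: Int, y: Int): Int = x - y
    def divide(x: Int, y: Int): Int = x / y
  }

  implicit object NumberLikeDouble extends NumberLike[Double] {
    def plus(x: Double, y: Double): Double = x + y
    def minus(x: Double, y: Double): Double = x - y
    def divide(x: Double, y: Int): Double = x / y
  }
}
```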
The whole point of the pattern is to be able to provide common behaviour to classes without tight coupling, or even modifying them at all. So far, we've created specific behaviours for our classes (like `plus` above) conforming to our "contract" type class `C`.
To call that behaviour, we use Scala's `implicit` semantics to find an appropriate implementation. It binds a concrete type of `T` (let's say `Int`) with its corresponding type class (`NumberLikeInt`). It means we only need one method for all number-like types.
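A sketch of such a method, a `mean` over any "number-like" sequence:

```scala
object Statistics {
  def mean[T](xs: Seq[T])(implicit number: NumberLike[T]): T =
    number.divide(xs.reduce(number.plus), xs.size)
}
```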
So, if an implicit parameter can be found for a given type, Scala will use that implementation. The `NumberLikeInt` is used below.
scala> println(Statistics.mean(List[Int](1, 2, 3, 6, 8)))
4
Without an implicit in scope, you’d get an error
Error:(42, 26) could not find implicit value for parameter number: NumberLike[Int]
println(Statistics.mean(Seq(1, 2, 3, 6, 8)))
Another way of writing the generic method is to use context bounds (i.e. use `T: NumberLike`).
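The same sketch with a context bound:

```scala
object Statistics {
  def mean[T: NumberLike](xs: Seq[T]): T = {
    val number = implicitly[NumberLike[T]]
    number.divide(xs.reduce(number.plus), xs.size)
  }
}
```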
As a simple extension, you can extend "member" types directly using an `implicit` class. For example, we can add the `mean` method to any sequence of `NumberLike`s.
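A sketch (the class name is illustrative):

```scala
implicit class NumberLikeSyntax[T: NumberLike](xs: Seq[T]) {
  def mean: T = Statistics.mean(xs)
}
```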
and call `mean` directly, on `Int` and `Double` sequences alike, as below.
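For example, in the REPL (values illustrative):

```scala
scala> List[Int](1, 2, 3, 6, 8).mean
res0: Int = 4

scala> List[Double](1.5, 2.5, 3.5, 4.5).mean
res1: Double = 3.0
```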
A basic "decoder" interface that uses an `Either` to return a result as either successful or unsuccessful.
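A sketch of the interface (names illustrative):

```scala
trait Decoder[A] {
  def decode(value: String): Either[String, A]
}
```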
We could provide an implementation to decode a string to a valid `Colour` type. Unsupported colours produce a "left" result.
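For example (the `Colour` type and its values are illustrative):

```scala
sealed trait Colour
case object Red extends Colour
case object Green extends Colour

implicit val colourDecoder: Decoder[Colour] = new Decoder[Colour] {
  def decode(value: String): Either[String, Colour] = value.toLowerCase match {
    case "red"   => Right(Red)
    case "green" => Right(Green)
    case other   => Left(s"unsupported colour: $other")
  }
}
```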
With an implicit class extending `String`, any string value can be decoded to a type `A`.
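A sketch:

```scala
implicit class DecoderSyntax(value: String) {
  def as[A](implicit decoder: Decoder[A]): Either[String, A] = decoder.decode(value)
}
```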
Then anywhere you have a string and you want to decode it, just go ahead.
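For example:

```scala
scala> "red".as[Colour]
res0: Either[String,Colour] = Right(Red)

scala> "magenta".as[Colour]
res1: Either[String,Colour] = Left(unsupported colour: magenta)
```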
Item | Price |
---|---|
Raspberry Pi Zero | £ 4 |
SanDisk 8GB microSDHC memory card | £ 4 |
2.54mm Header strip | £ 0.89 |
DS18B20 1-Wire temperature sensor | £ 1.45 |
1 x 4.7k Ω resistor | £ 0.10 |
Some jumper wires or otherwise recycled wires with connectors | £ 0.97 |
Data logging software | FREE |
Total | £ 11.41 |
Optional extras: you might also want to consider a USB wifi adapter (about £6), a case (I like the one from Switched On Components at £3.80) and a USB-to-TTL serial connection for headless setup. Something with a PL2303TA chip in it like this module or the Adafruit console cable.
Connecting the temperature sensor to the Pi is straightforward. There are loads of tutorials on the web but you're looking to connect the following physical pins on the Pi to the following sensor connectors.
Physical Pi Pin | Description | DS18b20 Connector |
---|---|---|
1 | 3.3v Power | Power (red) |
7 | GPIO 4 | Data (yellow) |
9 | Ground | Ground (black) |
The other thing you’ll need to do is connect the 4.7k Ω resistor between the power and data lines. This acts as a pull-up resistor to ensure that the Pi knows that the data line starts in a “high” state. Without it, it can’t tell if it should start as high or low; it would be left floating.
Make sure you have the following line in your `/boot/config.txt`. It will load the GPIO 1-wire driver and any attached temperature sensor should be automatically detected.
dtoverlay=w1-gpio
On the sensor itself, temperature measurements are stored in an area of memory called the "scratchpad". If everything is connected ok, the contents of the scratchpad will be written to a file under `/sys/bus/w1/devices/28-xxx/w1_slave` (where `xxx` will be a HEX number unique to your sensor). Here's an example from my `w1_slave` file.
4b 01 4b 46 7f ff 05 10 d8 : crc=d8 YES
4b 01 4b 46 7f ff 05 10 d8 t=20687
The temperature is shown as the `t` value; 20.687 °C in this case. The scratchpad allows you to program the sensor as well as read temperature data from it. See the data sheet or the project's documentation for more details (including how to change the precision of the sensor).
Once you can see the `w1_slave` file, you're ready to install the data logging software.
There are lots of options to record the temperature data but for something a bit different, the temperature-machine software logs temperatures from multiple sensors on multiple Pis. It sends the data to a nominated "server" and the server stores it all in a round robin database and serves up the charts via a web page.
Set up `apt-get` to recognise the temperature-machine repository and import the public key (increasing security by ensuring only official packages are installed from it).
sudo bash -c 'echo "deb http://robotooling.com/debian stable temperature-machine" >> /etc/apt/sources.list'
sudo apt-key adv --keyserver pool.sks-keyservers.net --recv-keys 00258F48226612AE
Install.
sudo apt-get update
sudo apt-get install temperature-machine
Decide if you will be running the temperature-machine as a server or client.
If you have a single machine, you want a server. If you already have a server running and you’re adding another machine, set it up as a client.
Run the following command (if you see an error about `port already in use`, try again until it works).
temperature-machine --init
It will ask you to choose between the server and client and create an appropriate configuration file in `~/.temperature/temperature-machine.cfg`.
If you created a server configuration file above, update the defaulted hosts in `~/.temperature/temperature-machine.cfg`.
The default server configuration will list some default values for `hosts`, such as:
hosts = ["garage", "lounge", "study"]
These are the machines you will be using in your final setup. Ensure these match the host names of each machine you plan to add. The values are used to initialise the database on the server. If you need to change this later, you will have to delete the database, losing any historic data, so add in some spares.
The software starts automatically (it runs as a service) but after setting up the configuration, you must either restart the service manually (run `sudo systemctl restart temperature-machine`) or wait about a minute and it will restart automatically.
Go to something like http://10.0.1.55:11900 from your favorite browser. Find your IP address on the Pi with `hostname -I`.
The logs can be found in the app or in `~/.temperature/temperature-machine.log`.
You can add as many client machines as you like and each machine can have up to five sensors attached. For example, add them to rooms and set the hostnames to match the room (just make sure the hostnames match what you put in your server's `temperature-machine.cfg` file).
Follow the steps above but select `client` when you run `temperature-machine --init`.
The server broadcasts its IP address, so any clients should automatically detect where the server is and start sending data to it.
The 1-wire protocol allows you to chain multiple sensors, so each Pi can have any number of sensors attached. The software automatically supports up to five sensors. Connect them to your Pi and restart and they’ll be automatically detected and included in the charts.
I found soldering a bunch of sensor wires together along with the resistor a bit tricky so I put together a simple PCB to allow me to chain them without soldering.
The software will automatically start as a service, it will even restart after a crash or reboot.
To see if it’s running, run the following.
systemctl status temperature-machine
Look for `active (running)` in the output.
● temperature-machine.service - temperature-machine
Loaded: loaded (/lib/systemd/system/temperature-machine.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2018-05-09 19:54:32 UTC; 2 days ago
Main PID: 22980 (java)
CGroup: /system.slice/temperature-machine.service
└─22980 java -Djava.rmi.server.hostname=10.0.1.26 -Xms256m -Xmx512m ...
To stop the service, run the following.
sudo systemctl stop temperature-machine
To restart, run the following (or reboot).
sudo systemctl restart temperature-machine
If you’re monitoring temperatures in a bedroom, you might not want to be disturbed by the LEDs. To switch the Pi Zero LED off, see the Raspberry Pi Forum and Stack Overflow and to switch an Edimax EW-7811 LED off, see my previous post.
Head over to the project’s website at temperature-machine.com for the full docs.
The only way I found to disable the LED is by modifying the kernel module. Compiling that meant recompiling the associated kernel to get all the dependencies lined up.
If you don’t want to have a go at compiling the kernel, you can always download the output of my efforts here (built against 4.9.17-v7+).
You’ll need to know the specific kernel version. Run the following.
$ uname -a
It’ll show something like this
Linux raspberrypi 4.1.13+ #826 PREEMPT Fri Nov 13 20:13:22 GMT 2015 armv6l GNU/Linux
The Edimax uses the `8192cu` module. You can check it's loaded with `lsmod`. You'll see something like this.
$ lsmod
Module Size Used by
cfg80211 499834 0
rfkill 22491 2 cfg80211
8192cu 569532 0
...
For interest, you can get more information by running `modinfo 8192cu`.
filename: /lib/modules/4.1.13+/kernel/drivers/net/wireless/rtl8192cu/8192cu.ko
version: v4.0.2_9000.20130911
author: Realtek Semiconductor Corp.
description: Realtek Wireless Lan Driver
license: GPL
srcversion: 133EACDEB0C6BEBC3ECA8D0
vermagic: 4.1.13+ preempt mod_unload modversions ARMv6
...
You need both the module and kernel source.
The latest driver version (`v4.0.2_9000`) on the Realtek site isn't actually the latest version. At least, it's been modified for the Pi. The good news is that the modified version is bundled with the Pi kernel source at https://github.com/raspberrypi/linux.git. On the Pi, run the following (matching your running kernel version with the `--branch` option).
$ git clone --branch=rpi-4.1.y --depth=50 https://github.com/raspberrypi/linux.git
$ ln -s linux linux-$(uname -r)
The `git clone` command will download the full source (including headers and all built-in drivers) into a new folder called `linux`. The symbolic link is just a handy reminder of what you've cloned.
The latest source may not match your running kernel version (`uname -r`). You can check in the `Makefile`:
VERSION = 4
PATCHLEVEL = 1
SUBLEVEL = 15
...
This is version `4.1.15` whereas my version was `4.1.13`. Major versions are stored as branches in the repository (hence the `--branch=rpi-4.1.y` option above) but if, like me, your version is a minor level, you have to scan the commits from the appropriate branch. For example, 4.1.13 and 4.1.12 were documented by Greg Kroah-Hartman in the commit messages. You could also try something like `git log --oneline | grep "Linux 4.1.18"` to save manually scanning the logs.
The upshot is that you may need to roll back to the revision that is specifically for your kernel version. That's why I used `--depth=50`, in the hope of catching the revision I'm interested in.
$ cd linux
$ git checkout 1f2ce4a2 # the SHA of your specific version, this is 4.1.13
Compiling anything in Linux usually requires you have the kernel header files available. The usual way to get these is to run `apt-get install linux-headers-$(uname -r)` but the maintainers of the Raspberry Pi Linux distribution don't make them available like this. Instead, we have to rely on the full kernel source you've just downloaded.
Create a symbolic link to fill in for the missing `build` folder in `/lib/modules` before you try and compile the driver:
$ cd ..
$ ln -s linux /lib/modules/$(uname -r)/build
This creates the missing folder but points at the newly downloaded source. It’s what fixes the infamous error;
make[1]: *** /lib/modules/4.1.13+/build: No such file or directory
Before we build the kernel, we need to create a `.config` file containing the current kernel configuration. The current config should be in the `/proc/config.gz` file on the Pi. If the file doesn't exist, run `sudo modprobe configs` and check again.
$ cd linux
$ zcat /proc/config.gz > .config
This isn't as scary as it sounds. We need to compile the kernel source. We're not going to install it, but we do want to create various dependencies that are needed to compile the driver. For example, compiling the driver would fail with missing files like `include/generated/autoconf.h` or `include/config/auto.conf`. Compiling the entire kernel is probably a bit overkill but I've found it easier than chasing down individual errors.
Before compiling the kernel, get some extra dependencies.
$ sudo apt-get install build-essential
$ sudo apt-get install libncurses5-dev # required for menuconfig
$ sudo apt-get install bc # required for timeconst.h
You can have a go at running just `make` from the `linux` folder at this point but various options need to be set and it's probably easier to use `menuconfig`. Make sure you created the `.config` from earlier then run the following.
$ cd linux
$ make menuconfig
Scan the options but as they're based on your current settings (via `.config`), you should just be able to quit (`ESC`, `ESC`) and something like the following will be output.
HOSTCC scripts/kconfig/mconf.o
HOSTCC scripts/kconfig/zconf.tab.o
HOSTCC scripts/kconfig/lxdialog/checklist.o
HOSTCC scripts/kconfig/lxdialog/util.o
HOSTCC scripts/kconfig/lxdialog/inputbox.o
HOSTCC scripts/kconfig/lxdialog/textbox.o
HOSTCC scripts/kconfig/lxdialog/yesno.o
HOSTCC scripts/kconfig/lxdialog/menubox.o
HOSTLD scripts/kconfig/mconf
scripts/kconfig/mconf Kconfig
configuration written to .config
*** End of the configuration.
*** Execute 'make' to start the build or try 'make help'.
The last remaining config files will have now been created, so you can do the actual build with;
make ARCH=arm
or as Avi P points out below if you’re running on a multi-core Pi like the Pi 2 or Pi 3;
make -j4 ARCH=arm
This takes a while; on my Pi Zero, over 12 hours. There’s always the option to cross compile if you’re in a hurry.
For extra background, I found an interesting guide on Stack Exchange about Configuring, Compiling and Installing Kernels (although we’re not going as far as installing the built kernel here).
This is the step that actually disables the LED on the dongle.
Locate the `autoconf.h` file in the drivers folder (`linux/drivers/net/wireless/rtl8192cu/include`, or `linux/drivers/net/wireless/realtek/rtl8192cu/include` in newer Linux versions) and comment out the `CONFIG_LED` macro definition. It should look like this when you're done.
// #define CONFIG_LED // <-- comment this line out to disable LED
#ifdef CONFIG_LED
#define CONFIG_SW_LED
#ifdef CONFIG_SW_LED
//#define CONFIG_LED_HANDLED_BY_CMD_THREAD
#endif
#endif // CONFIG_LED
The dependencies should all be available now, so you're ready to compile the driver. Compile from the location of the driver source (probably `linux/drivers/net/wireless/rtl8192cu` or `linux/drivers/net/wireless/realtek/rtl8192cu`).
$ cd linux/drivers/net/wireless/realtek/rtl8192cu
$ make ARCH=arm
Again, use the `-j4` flag if you're on a big boy Pi.
Once it's compiled, remove the old driver with `sudo rmmod 8192cu` and, from the driver folder, manually start up the newly compiled one: `sudo insmod 8192cu.ko`. Note that you'll lose network connectivity after removing the old module. Make sure you've got a way to connect back to your Pi (like a console cable).
Running `modinfo 8192cu` doesn't help verify the new driver as none of the meta-data has changed, but you can check the datestamp of the `.ko` and you should see that there's no LED flashing.
To keep the change, I renamed the patched module to `8192cu-no-led.ko` and copied it into the Pi's main kernel drivers folder. I renamed the original driver to `8192cu-original.ko` and created a symbolic link for the true module name `8192cu.ko`. This is because I want to be able to switch back easily and not have to modify any additional configuration (for example, any `/etc/modprobe.d/8192cu.conf` settings) or black lists.
$ mv 8192cu.ko 8192cu-no-led.ko
$ sudo cp 8192cu-no-led.ko /lib/modules/4.1.13+/kernel/drivers/net/wireless/rtl8192cu/
$ cd /lib/modules/4.1.13+/kernel/drivers/net/wireless/rtl8192cu/
$ sudo mv 8192cu.ko 8192cu-original.ko
$ sudo ln -s 8192cu-no-led.ko 8192cu.ko
You should see something like this.
$ ll
total 1332
lrwxrwxrwx 1 root root 13 Jan 12 17:58 8192cu.ko -> 8192cu-no-led.ko
-rw-r--r-- 1 root root 672500 Jan 12 17:58 8192cu-no-led.ko
-rw-r--r-- 1 root root 686160 Nov 18 16:01 8192cu-original.ko
You can enable the original driver by reassigning the symbolic link.
If the build complains about a missing `armv6l` folder, link it to `arm`:
$ cd linux/arch
$ sudo ln -s arm armv6l
or always run the following when compiling
make ARCH=arm
If you see the missing `build` folder error:
make[1]: *** /lib/modules/4.9.58+/build: No such file or directory. Stop.
Create a symbolic link to the downloaded kernel source:
$ sudo ln -s /home/pi/code/linux build
The folder should look something like this.
$ ll
total 1.9M
drwxr-xr-x 11 root root 4.0K Oct 27 18:52 kernel
lrwxrwxrwx 1 root root 19 Oct 29 11:33 build -> /home/pi/code/linux
-rw-r--r-- 1 root root 471K Oct 27 18:53 modules.alias
-rw-r--r-- 1 root root 486K Oct 27 18:53 modules.alias.bin
-rw-r--r-- 1 root root 4.7K Oct 27 18:52 modules.builtin
...
-rw-r--r-- 1 root root 249K Oct 27 18:53 modules.symbols.bin
Set up some config in the `modprobe.d` folder.
$ cd /etc/modprobe.d/
$ ll
total 16
-rw-r--r-- 1 root root 73 Jan 1 19:45 8192cu.conf
$ cat 8192cu.conf
options 8192cu rtw_power_mgnt=0 rtw_enusbss=0
The Adafruit Console Lead uses the PL2303TA (a USB-to-serial/parallel converter chip) to talk to the Pi over GPIO pins 8 and 10 via USB. You can use this kind of USB-to-serial communication on plenty of devices but with the Pi, it's handy to use the `screen` application to effectively open a "telnet-like" terminal to your Pi.
Most of the guides on the internet point you to older versions of the drivers, but to get things working on the Mac with El Capitan or Sierra, I’ve found v1.6 of the Prolific driver is the only working option.
It's not hard to build your own cable from the basic components, or you could try eBay for parts for under £2/$2, but I can't say if they use genuine Prolific chips or counterfeits.
$ ls -la /dev/tty.usb*
crw-rw-rw- 1 root wheel 17, 4 28 Dec 19:49 /dev/tty.usbserial
Start up `screen` and point it at your Pi (`115200` is the baud rate to communicate with).
screen /dev/tty.usbserial 115200
You might need to hit the `enter` key to wake things up, but you should see a regular Linux login prompt.
By default, the console width is 30 characters and wraps on a single line. It's pretty annoying when you paste a long command, so you can increase it for your session with the following.
stty cols 130
When you fire up the `screen` window manager, you can use `Ctrl` + `A` to enter "command mode"; hitting a subsequent key will execute a command. For example, `Ctrl` + `A` followed by a `?` will show you some helpful commands.
Here are some reminders of useful commands.
Command(s) | Description |
---|---|
Get some help | Ctrl + A, ? |
Kill a session | Ctrl + A, Ctrl + \ |
You might want to set up your wireless from within `screen`. Connecting to a non-hidden network is straightforward. Setting things up for a hidden network is a little more involved.
Check your `/etc/network/interfaces` file and ensure it has a `wlan0` section. For open networks, something like this.
source-directory /etc/network/interfaces.d
auto lo
iface lo inet loopback
iface eth0 inet dhcp
allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-ssid "Guest Network"
wpa-psk "passphrase"
and for hidden networks, modify the `/etc/network/interfaces` file to access a network with a hidden SSID, something like this.
source-directory /etc/network/interfaces.d
auto lo
iface lo inet loopback
iface eth0 inet dhcp
auto wlan0
allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-scan-ssid 1
wpa-ap-scan 1
wpa-key-mgmt WPA-PSK
wpa-proto RSN WPA
wpa-pairwise CCMP TKIP
wpa-group CCMP TKIP
wpa-ssid "network-name"
wpa-psk 12484.....654
iface default inet dhcp
Using the console lead is an easy way to use a telnet-like terminal to set up your Pi when you don't want to connect a monitor and keyboard. As an alternative, you could try a headless setup, whereby you'd mount an SD card (with a Raspbian image) onto a unix-like machine and modify the file system directly.
See my post on common Pi setup for more notes and tips on general Pi setup.
I'm left wondering why we still use "pair tests" for recruitment. Is it to see how candidates problem solve? How they'd be to work with? The only way to assess these things is actually to do them. Pair tests are a poor simulation. If you want to see how someone works, work with them. Don't pretend to work with them.
If the goal is to see how candidates pair, a pair test in an interview context is just the wrong way to do it. You’re already setting things up as a test and however hard you try, it won’t be a realistic simulation. You already know the answers, so despite saying “let’s try and pair on a problem and work out the solution together”, you’re already coming to the table with huge preconceptions and are poised for judgement.
The candidate has a right to understand what it might be like to work at your company and a pair test gives them very little information. As a candidate I’m terribly nervous, I’m full of doubt and questions about what the interviewers really want to see.
Candidates are often hyper-sensitive to the interviewer's comments so when an interviewer asks what she considers an innocuous question about the exercise, it's all too easy for the candidate to freak out. For example;
Interviewer: "So why have you used a `val` there and not a `def`."
Candidates inner voice: "Because that's how I like to do it... hang on, I can't say that.
They know something I don't know. What is it? WHAT IS IT? Oh, my word,
they think I'm an idiot. I *am* an idiot."
It just comes with the territory. You’re at interview. There is a very clear and very deep seated notion of employer / employee deference at play. Candidates are often expected to want to take a job, even when they know very little about it. Remember, any interview should be a two way process. Am I right for the role and is the role right for me. It should be a partnership.
During my last round of recruitment using pair testing, I made a huge effort to put candidates at ease. We asked candidates to bring code with them to extend and made it clear there is no right or wrong answer. Yet, without exception, all of them showed signs of stress, panicked and basically got themselves into a muddle.
When you ask a candidate to solve a problem, however trivial you might think it is, you’re basically asking them to come up with a solution in thirty seconds. That’s not how I work in my day job. I’ll chew over a problem, try one or two things, roll back, have another go. I think about what I’m trying to do and that might take a minute. I don’t hack the first thing that comes into my head.
As an example, I did a pair test with a TV broadcasting company recently. They asked me to solve the Shopping Basket Problem. I floundered when I was put on the spot but had another go later. At home, I spent an hour or so and came up with something I was really pleased with, including an elegant way to mixin offers to shopping items.
case object Banana extends Fruit(51) with ThreeForTwo
case object Apple extends Fruit(12) with TwoForOne
case object Pineapple extends Fruit(95)
There is often no clear yes or no result to a pair test. Because of the stressful nature, you can’t be sure you’re getting the best out of candidates. People often don’t take this into account and assume the simulation was realistic. Best case scenario: someone flies through the exercise and gets everything “right”. Are they an easy hire? Is there more to them?
In my recent round of recruitment, we offered four roles and each of them basically messed up the pair test. Each of them panicked or went off in a weird direction in a moment of stress. There’s no coming back for the candidate when that happens. I’m hugely proud that we understood things well enough to still make offers, after what might have been seen as a “fail”.
Rather than simulate a working environment, create a real working environment. It’s not always practical to ask a candidate to work with you for a week on your projects but you can invite candidates to an assessment day or hackathon.
Batch your candidates into cohorts and ask them to pair with each other whilst you observe. Solve a sensible but fun problem over the course of an entire day and get them to explain their choices and experiences.
Rotate the pairs and get everyone on your dev team involved. Get them to walk around the room to observe candidates. As you have the time, you can mix in more traditional one-to-one interviews towards the end of the day. Oh, and don’t forget to provide lunch.
It is really hard to be objective in assessing candidates using pair testing. Partly because this kind of assessment is inherently subjective. Pair testing doesn’t lend itself to critical thinking as there is no clear critique. Trying to identify objective criteria for code is a waste of your time; “quality” is not quantitative.
Candidates and interviewers come to pair testing with expectations and bias, knowingly or otherwise (search for framing, anchoring and cognitive bias). It means the thing isn’t fair from the start. You might get lucky and it’ll work out but from my experience, you’ll get more false negatives than positive outcomes.
Setting the `JAVA_HOME` environment variable was always a pain. Then I discovered the handy `java_home` command.
/usr/libexec/java_home -V
It shows which Java versions are available and where they are. For example, on my machine, the output looks like this.
Matching Java Virtual Machines (4):
1.8.0, x86_64: "Java SE 8" /Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home
1.7.0_25, x86_64: "Java SE 7" /Library/Java/JavaVirtualMachines/jdk1.7.0_25.jdk/Contents/Home
1.6.0_65-b14-466.1, x86_64: "Java SE 6" /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home
1.6.0_65-b14-466.1, i386: "Java SE 6" /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home
If you use the alternative option `-v <version>`, you get the path to a specific version.
/usr/libexec/java_home -v 1.8
It shows my 1.8 version lives at `/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home`.
To switch between versions, I can just run the following.
export JAVA_HOME=`/usr/libexec/java_home -v 1.8`
You could also add an alias to switch to a specific version;
alias setjava8='export JAVA_HOME=`/usr/libexec/java_home -v 1.8`'
If you want to do something more sophisticated, you could try something like this or this.
Caveat: I don’t have Java on my path, if you do, switching versions using the environment variable may not work.
In the last post, we looked at parameters marked with the `implicit` keyword. In this post, we'll take a look at implicit functions and how they can be useful to convert things of one type to things of another.
Implicit functions will be called automatically if the compiler thinks it’s a good idea to do so. What that means is that if your code doesn’t compile but would, if a call was made to an implicit function, Scala will call that function to make it compile. They’re typically used to create implicit conversion functions; single argument functions to automatically convert from one type to another.
For example, the following function allows you to convert a Scala function into an instance of the Java 8 `Consumer` (a functional interface with a single-argument method) while still using Scala's concise syntax.
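A sketch of such a conversion:

```scala
import java.util.function.Consumer

implicit def toConsumer[A](f: A => Unit): Consumer[A] = new Consumer[A] {
  override def accept(a: A): Unit = f(a)
}
```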
You can avoid having to write clunky anonymous class instantiation when interfacing with Java and so mimic Java’s lambda syntax. So rather than having to use the longhand version like this.
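The longhand version might look like this (assuming some Java collection `elements` of type `Element`, both illustrative):

```scala
elements.forEach(new Consumer[Element] {
  override def accept(element: Element): Unit = println(element)
})
```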
You can write this, where we just pass a Scala function to Java's `forEach` method.
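With the implicit conversion in scope:

```scala
elements.forEach((element: Element) => println(element))
```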
The argument to `forEach` is actually a function of type `Element => Unit`. Scala recognises that the `toConsumer` method could convert this into a `Consumer[Element]` and does so implicitly. It's basically short-hand for this.
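That is, the explicit call:

```scala
elements.forEach(toConsumer((element: Element) => println(element)))
```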
If we have a button on a web page that we'd like to find using Web Driver, we'd normally write something like the following, using a "locator" to locate it by `id` attribute.
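A sketch (the element id is illustrative, and `driver` is assumed to be in scope):

```scala
import org.openqa.selenium.{By, WebElement}

val button: WebElement = driver.findElement(By.id("create"))
button.click()
```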
It doesn't take into account that the element might not be there when we call it (for example, when our UI uses ajax and adds the button asynchronously) and it's also a bit verbose. We can use an implicit function to address both of these issues. The fragment below uses the `WebDriverWait` class to wait for a UI element to appear on the screen (using `findElement` to check and retrying if necessary) and so smooths out the asynchronous issues.
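A sketch of what that fragment might look like (the five second timeout is illustrative):

```scala
import org.openqa.selenium.support.ui.{ExpectedConditions, WebDriverWait}

implicit def waitForElement(locator: By): WebElement =
  new WebDriverWait(driver, 5).until(ExpectedConditions.presenceOfElementLocated(locator))
```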
It's also an implicit function designed to convert a `By` locator into a `WebElement`. It means we can write something like the following, where `button` is no longer a `WebElement` but a `By`.
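For example:

```scala
val button: By = By.id("create")
button.click() // By has no click method; the implicit conversion kicks in
```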
Without the implicit `waitForElement` function, the code wouldn't compile; `By` doesn't have a `click` method on it. With the implicit function in scope however, the compiler works out that calling it (and passing in `button` as the argument) would return something that does have the `click` method, and so it compiles.
Now there's one little bit I've brushed over here; namely how the `WebDriver` `driver` instance is made available. The example above assumes it's available but it'd be nicer to pass it into the function along with the `locator`. However, there's a restriction of passing only a single argument into an implicit function. The answer is to use a second argument list (using Scala's built-in currying support). By combining implicit parameters, which we saw in the previous post, we can maintain the elegant API.
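A sketch of the curried version:

```scala
implicit def waitForElement(locator: By)(implicit driver: WebDriver): WebElement =
  new WebDriverWait(driver, 5).until(ExpectedConditions.presenceOfElementLocated(locator))
```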
So the full example would look like this; making `driver` an implicit `val` means we can avoid a call like `button.click()(driver)`.
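Putting it together (the choice of `ChromeDriver` is illustrative):

```scala
import org.openqa.selenium.chrome.ChromeDriver

implicit val driver: WebDriver = new ChromeDriver()

val button: By = By.id("create")
button.click() // resolves to waitForElement(button)(driver).click()
```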
You can see from the examples above that implicit functions (and often combining them with implicit values) can make for succinct and more readable APIs. Next we’ll look at implicit classes.
If you're interested in more Java bridge implicits like `toConsumer`, check out this gist.
With Scala's implicits, you can, say, define a function to convert an `Int` to a `String` and, rather than call that function explicitly, you can ask the compiler to do it for you, implicitly.
In the next few posts, we’ll look at the different types of implicit bindings Scala offers and show some examples of when they can be useful.
There are three categories of “implicits”;
- implicit parameters, passed in automatically by the compiler
- implicit functions, `def`s that will be called automatically if the code wouldn't otherwise compile
- implicit classes, used to extend existing types

At its simplest, an implicit parameter is just a function parameter annotated with the `implicit` keyword. It means that if no value is supplied when called, the compiler will look for an implicit value and pass it in for you.
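A sketch (the arithmetic is illustrative):

```scala
def multiply(implicit by: Int): Int = 10 * by
```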
You tell the compiler what it can pass in implicitly by annotating values with `implicit`.
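```scala
implicit val multiplier: Int = 2
```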
and call the function like this
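```scala
multiply // no argument supplied
```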
The compiler knows to convert this into a call to `multiply(multiplier)`. If you forget to define an implicit value, you'll get an error like the following.
error: could not find implicit value for parameter by: Int
multiply
^
You can ask the compiler to call your function with an implicit `val` (like we've just seen), a `var` or even another `def`. So, we could have written a function that returns an `Int` and Scala would attempt to use that instead.
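A sketch:

```scala
implicit def f(): Int = 2
```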
The compiler would try to resolve this as `multiply(f())`.
However, you can't have more than one in scope. So if we have both the `multiplier` value and the `f` function defined as implicit and call `multiply`, we'd get the following error.
error: ambiguous implicit values:
both value multiplier of type => Int
and method f of type => Int
match expected type Int
multiply
^
You can only use `implicit` once in a parameter list, and all parameters following it will be implicit. For example;
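A sketch (names illustrative); both `by` and `suffix` are implicit here, even though the keyword appears once:

```scala
def describe(x: Int)(implicit by: Int, suffix: String): String = s"${x * by}$suffix"
```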
As an example, the test below uses Web Driver (and specifically an instance of the `WebDriver` class) to check that a button is visible on screen. The `beVisible` method creates a `Matcher` that will check this for us but rather than pass in the `driver` instance explicitly, it uses an implicit `val` to do so.
|
Implicit parameters are useful for removing boilerplate parameter passing and can make your code more readable. So if you find yourself passing the same value several times in quick succession, they can help hide the duplication.
The Scala library often uses them to define default implementations that are "just available". When you come to need a custom implementation, you can pass one in explicitly or use your own implicit value. A good example here is the `sorted` method on the `SeqLike` class.
The really useful stuff though comes when we combine implicit parameters with the other types of “implicits”. Read more in the series to build up a picture.
|
|
|
|
Notes:
- `expects()` is required for zero-argument method call expectations.
- You can leave off `once`; it will default to the same behaviour.
|
|
Notes:
- You could add `.anyNumberOfTimes` after the `returning` call but it's unnecessary.

JMock will return a default value (as a dynamic proxy) if you set up an expectation but leave off a `returnValue`. In the example below, we don't care if it returns anything, so if the code under test relies on a value, but the test does not, we don't have to express anything in the test.
|
If the underlying code were to check, say, that the result of `factory.create()` was not an empty list with `if (result.isEmpty())`, JMock would return something sensible and we'd avoid a `NullPointerException`. You might argue that this side effect should be captured in a test but leaving it off makes the intention of the expectation clearer; we only care that `create` is called, not what it returns.
ScalaMock will return `null` by default. So the above example would give a `NullPointerException` and you're required to do something like this. Notice we're using a `stub` and not a `mock` here.
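A sketch of what that might look like (the `Factory` type is assumed from the surrounding examples):

```scala
val factory = stub[Factory]
(factory.create _).when().returns(List())
```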
JMock uses `with` and the Hamcrest matcher `IsAnything` (`any`) to match anything. The type is used by the compiler.
|
In the Scala version, use a type ascription to give the compiler a hand in the partially applied method call;
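A sketch (`notifyObservers` and `SomeException` are assumed from the surrounding notes):

```scala
(factory.notifyObservers(_: SomeException)).expects(*)
```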
Notes:
- `AnotherException` is a subtype of `SomeException` but `any` will match on literally anything. Using subtypes like this in JMock is a bit of a smell as a test won't fail if a different subtype is thrown at runtime. It may be useful to express intent.
- In the Scala version, `(factory.notifyObservers(_: AnotherException))` doesn't compile.
|
|
Notes:
- `throws` and `throwing` are interchangeable.
- `once` is optional.

Taken from my Pluralsight course, the chart shows experience (or time) along the x-axis and some measure of "learning" on the y-axis.
When you first start, you can expect getting up to speed with the language to be a fairly steep incline. That’s not to say that it’s difficult to get to the first plateau, so by “steep”, I really mean “short”; you can expect a relatively quick increment in learning.
You’ll probably sit here for a bit applying what you’ve learnt. I see this as the first milestone; to be able to build object-oriented or imperative applications using language specific constructs and features but without necessarily adopting functional programming. Just like learning any other language in the Java / C family.
I see the next milestone as adopting functional programming techniques.
This is much more challenging and likely to be a shallower curve. Typically this will involve using traditional architecture design but implementing functional programming techniques in the small. You can think of this approach as “functional in the small, OO in the large”. Starting to embrace a new functional way of thinking and unlearning some of the traditional techniques can be hard, hence the shallower incline.
Concrete examples here are more than just language syntax, so things like higher order and pure functions, referential transparency, immutability and side-effect free, more declarative coding; all the things that are typically offered by pure functional languages. The key thing here is that they're applied in small, isolated areas.
The next challenge is working towards a more cohesive functional design; this really means adopting a functional style at a system level; architecting the entire application as functions and abandoning the object-oriented style completely. So, aiming for something like a Haskell application.
All the concrete functional programming mechanisms above apply but this time, throughout the system; not to isolated areas but lifted to application-wide concerns. Picking up advanced libraries like Scalaz seems to go hand-in-hand with this point of the curve.
You can also think of adoption as more of a continuum with traditional imperative programming on the left and pure functionally programming on the right.
You can think of the far right as Haskell on the JVM. Haskell is a pure functional language so you don’t have any choice but to design your app in a functional way. Scala is an object-oriented / functional hybrid, it can only give you the tools. It can’t enforce functional programming; you need discipline and experience in Scala to avoid mutating state for example. Haskell will physically stop you.
So as you start out on the continuum using Java and move to the right, libraries like Functional Java, Totally Lazy and even Java 8 features help you adopt a more functional style. There comes a point where a language switch helps even more. Functional idioms become a language feature rather than a library feature. The syntactical sugar of for-comprehensions are a good example.
Libraries like Scalaz make it easier to a develop a purely functional style. It’s worth noting that reaching the far right of the continuum (or top right quadrant of the learning curve) doesn’t have to be the goal. There are plenty of teams operating effectively across the continuum.
When you’re adopting Scala, make a deliberate decision about where you want to be on the continuum, be clear about why and use my learning curve as a way to gauge your progress.
I’ve developed a video course exclusively for Pluralsight to help Java teams make the transition to Scala. If you’re interested and liked this post, check it out or read my book Learn Scala for Java Developers.
It can be tricky not to break the inheritance vs. composition principle when using traits with behaviour. Is it clear to you when you might be?
Odersky calls traits with behaviour “mixin traits”. To be a genuine mixin trait, it should be used to mixin behaviour and not just something you inherit from. But what’s the difference? Let’s look at an example.
Let's say that you have a repository-style class whose API talks about business operations, a `Customers` class for example. You might have a database-backed version and you don't want anything going behind your back and messing with the data; everything in production code should go through your business API.
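A sketch of such an API (entirely illustrative):

```scala
case class Customer(name: String)

trait Customers {
  def add(customer: Customer): Customer
  def find(name: String): Option[Customer]
}
```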
Now let's say that you want a test fixture to allow you to quickly set up test data in your `Customers` without having to go through the production API. You can provide an implementation to a trait and collect some data together like this;
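A sketch of the fixture trait (the setup data is illustrative):

```scala
trait BackdoorCustomers {
  val customers: Customers // extending classes must provide this

  def createTestData(): Unit = {
    customers.add(Customer("test-customer-1"))
    customers.add(Customer("test-customer-2"))
  }
}
```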
This says that extending classes must provide a value for `customers`. It implements some coarse-grained test setup against `customers`. So when writing a test, it's easy to just extend the trait and slot in an implementation of `customers`. For example, an `InMemoryCustomers` or an Oracle implementation that by-passes any constraint checking the proper API might enforce.
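For example (a sketch; `OracleCustomers` is hypothetical):

```scala
class OracleCustomerTest extends BackdoorCustomers {
  val customers = new OracleCustomers()
}
```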
But we're saying here that an `OracleCustomerTest` is a `BackdoorCustomers`. That doesn't even make sense. There's no strong notion of a `BackdoorCustomers`; it's not a meaningful noun. Best case scenario, you're upfront about the fact that it's a fixture and rename `BackdoorCustomers` to `CustomersTestFixture` but even then, the test is not a fixture, the two are independent. One is test apparatus that supports the test, the other is the test or experiment itself.
It’s tempting to use traits like this under the pretense of “mixing in” behaviour but you’re really inheriting behaviour from something (that in our case) isn’t related. You’re precluding any type of substitution or inclusion polymorphism. Now arguably, substitution isn’t of great value in test code like this but it’s still a laudable goal.
Using inheritance to mix in behaviour contradicts the inheritance vs. composition principle. So just when is a trait with behaviour a genuine mixin? The trick is in how we mix it in. Before, we made the types inherit the trait, but we could have mixed the trait into a specific instance.
For example, we can rework our trait to use a self type.
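A sketch of the rework:

```scala
// the self type says: whatever mixes this in must itself be a Customers
trait BackdoorCustomers {
  this: Customers =>

  def createCustomers(names: String*): Unit =
    names.zipWithIndex.foreach { case (name, id) =>
      register(Customer(id.toString, name))   // calls land on the Customers side
    }
}
```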
It now forces implementers to also be a subtype of Customers. This, in turn, forces us to rewrite the test.
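Which might now look something like this:

```scala
import org.junit.Test
import org.junit.Assert.assertTrue

class OracleCustomerTest {
  // mix the backdoor into a specific instance, not into the test itself
  val customers = new OracleCustomers with BackdoorCustomers

  @Test def findsPreviouslyCreatedCustomers(): Unit = {
    customers.createCustomers("Alice", "Bob")
    assertTrue(customers.find("0").isDefined)
  }
}
```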
So now our test is not inheriting an orthogonal type. From an object-oriented perspective, it’s much cleaner. We use composition to give the test a customers instance, but this time, we treat it as two things.
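The actual type of the thing is the compound type below (the explicit annotation is just to spell it out):

```scala
// both the production API and the backdoor, as one compound type
val customers: OracleCustomers with BackdoorCustomers =
  new OracleCustomers with BackdoorCustomers
```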
So all the backdoor methods work along with the API methods, but now we can be clearer about which is which.
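For example (the widening to the plain Customers type is my illustration):

```scala
customers.createCustomers("Alice")   // backdoor: test apparatus only
val alice = customers.find("0")      // business API: what production code calls

// and you can narrow to just the API when you want to be explicit about it
val api: Customers = customers
```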
Scala is both an object-oriented language and a functional language, so unless your team is entirely behind doing things functionally, you’re still going to come across object-oriented thinking and principles. Traits that have behaviour make it awkward because, thinking functionally, you could argue that nouns aren’t important and behaviour in traits is just behaviour. So why not extend that behaviour by whatever means (including inheritance)?
Because Scala has objects, you can’t really just ignore object-oriented semantics and thinking. Not unless, like I say, the entire team buys into functional-only code. If that were the case, then reusable behaviour should really be represented as functions on Scala singleton objects and not traits; you’d be forced to use composition anyway.
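A sketch of what that might look like (the CustomerSetup object is my invention, reusing the assumed types from above):

```scala
object CustomerSetup {
  def createCustomers(customers: Customers, names: String*): Unit =
    names.zipWithIndex.foreach { case (name, id) =>
      customers.register(Customer(id.toString, name))
    }
}

// no inheritance anywhere; behaviour is composed with an instance at the call site
CustomerSetup.createCustomers(new InMemoryCustomers, "Alice", "Bob")
```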
By that logic, it feels like extending traits for reuse in a functional programming context is just lazy. Mixing in behaviour “the right way” seems much less contentious.
Pure functional languages often discourage the use of exceptions because, when they are used to control execution flow, they introduce side effects and violate the purity of functions. By using the type system to capture exceptional behaviour, and by dealing with exceptions monadically, it’s much easier to provide the system-wide consistency I’ve been talking about.
The norm for object-oriented code is to use exceptions to control execution flow. When you have a method that can return true or false and throw an exception, it might as well be returning three things. It forces clients to reason about logic that has nothing to do with the function of the method. It’s complicated and often makes it hard to treat exceptions consistently across the entire application.
So what can we learn from functional programming languages? Exceptions are a fact of life; unexpected things can happen in your code and you still need to deal with them. The subtlety here is that functional languages emphasise the unexpected part of exceptions. They try to discourage you from using exceptions to deal with known branches of logic and instead use them like Java uses Errors (i.e. as non-recoverable). This means thinking of exceptions as exceptional behaviour, not as Java’s notion of checked Exceptions.
So how do languages like Scala discourage you from using them like Java? They usually offer alternative mechanisms. Scala, for example, has the Either and Try classes. These classes allow you to express, using the type system, that a method was successful or unsuccessful, independently of the return value. As an additional bonus, because they are monadic, you can deal with exceptional and expected behaviour consistently in code. That means you can use the same structures to process the positive and the negative case without resorting to catch blocks.
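As a small illustration (the parsePort example is mine, not from the original), Try lets both outcomes flow through the same combinators:

```scala
import scala.util.Try

// failure is encoded in the return type rather than thrown past the caller
def parsePort(s: String): Try[Int] = Try(s.toInt)

// success and failure are handled with the same structures; no catch blocks
val report = parsePort("8o80")
  .map(port => s"listening on $port")
  .recover { case e: NumberFormatException => s"bad port: ${e.getMessage}" }
```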
For example, let’s say we have a method uploadExpenses that uploads this month’s expenses to my online accountant’s web service. It uploads a single expense at a time, so it could fail because of some network problem or because the web service rejects an individual Expense. Once done, I’d like to produce a report (just using System.out in our example).
In the traditional exception-throwing version below, the uploadExpenses call can break after only some of the expenses have been uploaded. With no report, it would be hard to work out which were successfully uploaded. You’re also left to deal with the exceptions. If other code depends on this, it may make sense to propagate the exception to an appropriate system boundary, but dealing with exceptions consistently for the entire system is a real challenge.
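A minimal sketch of that version; the Expense type and the WebService stand-in are assumptions to keep the example self-contained:

```scala
case class Expense(description: String, amount: BigDecimal)

object WebService {
  // stand-in for the accountant's service; throws on rejection or network failure
  def upload(expense: Expense): Unit =
    if (expense.amount <= 0) throw new IllegalArgumentException(s"rejected $expense")
}

// can break halfway through, leaving earlier uploads unaccounted for
def uploadExpenses(expenses: List[Expense]): Unit =
  expenses.foreach(WebService.upload)

val expenses = List(Expense("taxi", 20), Expense("lunch", -15), Expense("hotel", 100))

try {
  uploadExpenses(expenses)
  expenses.foreach(expense => println(s"uploaded $expense"))
} catch {
  // no report: we can't tell which expenses made it up before the failure
  case e: Exception => println(s"upload failed: ${e.getMessage}")
}
```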
On the other hand, if we use an Either, we can make the uploadExpenses call return either a successfully uploaded Expense or a tuple detailing the expense that failed to upload along with the reason why. Once we have a list of these, we can process them in the same way to produce our report. The neat thing here is that the exceptional behaviour is encoded in the return type; clients know that this thing could fail and can deal with it without coding alternative logic.
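A sketch of the Either-returning version, reusing the assumed types from above:

```scala
import scala.util.Try

type UploadResult = Either[(Expense, String), Expense]

// every expense gets a result: Right if uploaded, Left with the reason why not
def uploadExpenses(expenses: List[Expense]): List[UploadResult] =
  expenses.map { expense =>
    Try(WebService.upload(expense)).fold(
      error => Left(expense -> error.getMessage),
      _ => Right(expense)
    )
  }

// the report processes successes and failures with the same structure
uploadExpenses(expenses).foreach {
  case Right(expense)          => println(s"uploaded $expense")
  case Left((expense, reason)) => println(s"failed $expense: $reason")
}
```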
In this way, having the semantics baked into the return types is what forces clients to deal with the exceptional behaviour. Handling the results monadically ensures that we can deal with them consistently. For a naive implementation, have a look at my gist, and for fuller implementations, see Scala’s version or the TotallyLazy and Functional Java versions in Java.
So, a typical implementation of an anonymous class (implementing a single-method interface) in Java pre-8 might look something like this. The anonymousClass method calls the waitFor method, passing in some implementation of Condition; in this case, it’s saying wait for some server to have shut down.
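A sketch of that shape (the Condition and Server interfaces are assumed; waitFor is shown further down):

```java
interface Condition {
    boolean isSatisfied();
}

interface Server {
    boolean isRunning();
}

void anonymousClass(final Server server) throws InterruptedException {
    // wait for the server to have shut down
    waitFor(new Condition() {
        @Override
        public boolean isSatisfied() {
            return !server.isRunning();
        }
    });
}
```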
The functionally equivalent lambda would look like this.
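A sketch, reusing the assumed Server and Condition from above:

```java
void lambda(Server server) throws InterruptedException {
    // the same condition, expressed directly as a function
    waitFor(() -> !server.isRunning());
}
```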
Where, in the interest of completeness, a naive polling waitFor method might look like this.
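Again a sketch; the 250ms poll interval is arbitrary:

```java
void waitFor(Condition condition) throws InterruptedException {
    // naive: poll until satisfied, with no timeout
    while (!condition.isSatisfied()) {
        Thread.sleep(250);
    }
}
```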
Firstly, both implementations are in fact closures; the latter is also a lambda. Confused? See my distinction between lambdas and closures. This means that both have to capture their “environment” at runtime. In Java pre-8, this means copying the things the closure needs into an instance of a class (an anonymous instance of Condition); in our example, the server variable.
As it’s a copy, it has to be declared final to ensure that it cannot be changed between when it’s captured and when it’s used. These two points in time can be very far apart, given that closures are often used to defer execution until some later point (see lazy evaluation, for example). Java 8 uses a neat trick: if it can reason that a variable is never updated, it might as well be final, so it’s treated as “effectively final” and you don’t need to declare it final explicitly.
A lambda, on the other hand, doesn’t need to copy its environment or capture any terms. This means it can be treated as a genuine function and not as an instance of a class. What’s the difference? Plenty.
For a start, genuine functions don’t need to be instantiated many times. I’m not sure instantiation is even the right word when talking about allocating memory and loading a chunk of machine code as a function. The point is that once it’s available, it can be re-used; it’s idempotent in nature as it retains no state. Static class methods are the closest thing Java has to functions.
For Java, this means that a lambda need not be instantiated every time it’s evaluated, which is a big deal. Unlike with an anonymous class, which is instantiated on each use, the memory impact should be minimal.
Those are the conceptual differences; Java 8 also differs in more concrete ways.
Another difference is around the capture semantics of this. In an anonymous class, this refers to the instance of the anonymous class; Foo$InnerClass, for example, and not Foo. That’s why you get wacky syntax like Foo.this.x when you refer to the enclosing scope from within the anonymous class.
In lambdas, on the other hand, this refers to the enclosing scope (Foo directly, in our example). In fact, lambdas are entirely lexically scoped, meaning they don’t inherit any names from a supertype or introduce a new level of scoping at all; you can directly access fields, methods and local variables from the enclosing scope.
For example, this class shows that the lambda can reference the firstName variable directly.
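A sketch (the Example class and the Supplier-returning method are my assumptions):

```java
import java.util.function.Supplier;

public class Example {
    private final String firstName = "Charlie";

    Supplier<String> greeting() {
        // lexically scoped: firstName is accessed directly, no qualification
        // needed, and 'this' inside the lambda is the enclosing Example
        return () -> "Hi " + firstName;
    }
}
```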
The anonymous class equivalent would need to refer to firstName explicitly from the enclosing scope.
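The same assumed class, in anonymous form:

```java
import java.util.function.Supplier;

public class Example {
    private final String firstName = "Charlie";

    Supplier<String> greeting() {
        return new Supplier<String>() {
            @Override
            public String get() {
                // 'this' here is the anonymous Supplier instance, so the
                // enclosing field is reached via the qualified Example.this
                return "Hi " + Example.this.firstName;
            }
        };
    }
}
```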
Shadowing also becomes much more straightforward to reason about (when referencing shadowed variables).
The other thing to note is the bytecode an anonymous class implementation produces compared to the lambda bytecode. The former uses the invokespecial instruction, whereas a lambda uses invokedynamic. The difference is about when the caller is linked to a destination: lambdas are linked at runtime (invokedynamic) rather than at compile time (invokespecial and invokevirtual).
This may not seem like a big deal, but the main takeaway is that these instructions can be optimised by the JVM. We can expect dynamic invocations (and so lambdas) to outperform their more traditional counterparts.
The invokedynamic instruction was originally motivated by the need to support more dynamic languages on the JVM. With it, you don’t need to know the types ahead of time; you can relax the constraints of static typing and support dynamically typed languages (like JavaScript). However, it can be used to do much more.
With Java 8, it links into type inference and target typing; it supports method references (method handles) and default methods, removes the need to create intermediary anonymous instances, avoids bridge methods, and opens up optimisation opportunities. Its introduction in Java 7 went under the radar for the mainstream, but it’s probably the biggest enabler of Java 8 features like lambdas. It’s the mechanism by which Java achieves no additional class loading when using lambdas.
So there we have it. Functions in the academic sense are very different things from anonymous classes (which we often treat like functions in Java pre-8). I find it useful to keep the distinctions in my head, as I feel I need to be able to justify the use of Java 8 lambdas in my code with more than just an argument for their concise syntax. Of course, there are lots of additional advantages to using lambdas (not least the retrofit of the JDK to use them heavily), but I want to be able to respond when people say “isn’t that just syntactic sugar over anonymous classes?”.